2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

A Participatory Design Process of a Robotic Tutor of Assistive Sign Language for Children with Autism
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956309
Minja Axelsson, M. Racca, Daryl Weir, V. Kyrki
We present the participatory design process of a robotic tutor of assistive sign language for children with autism spectrum disorder (ASD). Robots have been used in autism therapy, and to teach sign language to neurotypical children. The application of teaching assistive sign language — the most common form of assistive and augmentative communication used by people with ASD — is novel. The robot’s function is to prompt children to imitate the assistive signs that it performs. The robot was therefore co-designed to appeal to children with ASD, taking into account the characteristics of ASD during the design process: impaired language and communication, impaired social behavior, and narrow flexibility in daily activities. To accommodate these characteristics, a multidisciplinary team defined design guidelines specific to robots for children with ASD, which were followed in the participatory design process. In a pilot study, in which the robot prompted children to imitate nine assistive signs, we found support for the effectiveness of the design. The children successfully imitated the robot and kept their focus on it, as measured by their eye gaze. Children and their companions reported positive experiences with the robot, and companions evaluated it as potentially useful, suggesting that robotic devices could be used to teach assistive sign language to children with ASD.
The Power to Persuade: A Study of Social Power in Human-Robot Interaction
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956298
Mojgan Hashemian, Ana Paiva, S. Mascarenhas, P. A. Santos, R. Prada
Recent advances in social robotics raise the question of whether a social robot can be used as a persuasive agent. To date, a body of studies has addressed this research question using various approaches, ranging from the use of non-verbal behavior to the exploration of different embodiment characteristics. In this paper, we investigate the role of social power in making social robots more persuasive. Social power is defined as one’s ability to influence another to do something they would not do in the absence of that power. Different theories classify alternative ways to achieve social power, such as providing a reward, using coercion, or acting as an expert. In this work, we explored two persuasive strategies based on social power (specifically, Reward and Expertise) and created two social robots that employ these strategies. To examine the effectiveness of the strategies, we performed a user study with 51 participants using two social robots in an adversarial setting, in which both robots try to persuade the user toward a concrete choice. The results show that even though the two strategies caused the robots to be perceived differently in terms of competence and warmth, both were similarly persuasive.
PIVO: Probabilistic Inverse Velocity Obstacle for Navigation under Uncertainty
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956406
P. N. Jyotish, Yash Goel, A. V. S. S. B. Kumar, K. Krishna
In this paper, we present an algorithmic framework that computes collision-free velocities for a robot in a human-shared, dynamic, and uncertain environment. We extend the concept of the Inverse Velocity Obstacle (IVO) to a probabilistic variant that handles the state-estimation and motion uncertainties arising from the other participants in the environment. These uncertainties are modeled as non-parametric probability distributions. In our PIVO (Probabilistic Inverse Velocity Obstacle), we pose collision-free navigation as an optimization problem by reformulating the velocity conditions of IVO as chance constraints that take the uncertainty into account. The collision-free velocities that result from the presented optimization scheme are associated with a confidence measure given as a specified probability. We demonstrate the efficacy of PIVO through numerical simulations, showing its ability to generate safe trajectories in highly uncertain environments.
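To illustrate the chance-constraint idea, the collision-free condition can be checked empirically against samples drawn from a non-parametric obstacle distribution, accepting a candidate velocity only if the estimated collision-free probability meets the required confidence. The sketch below is a simplified, sampling-based stand-in (straight-line motion, a single disc-shaped obstacle, hypothetical function names), not the paper's PIVO formulation:

```python
import numpy as np

def collision_free_prob(p_r, v_r, obs_pos_samples, obs_vel_samples,
                        radius=0.5, horizon=3.0, dt=0.1):
    """Monte-Carlo estimate of the probability that velocity v_r keeps the
    robot more than `radius` away from an obstacle whose uncertain state is
    represented by particles (an illustrative stand-in for a chance
    constraint on the velocity)."""
    times = np.arange(0.0, horizon, dt)                       # (T,)
    rel_p = obs_pos_samples - p_r                             # (N, 2)
    rel_v = obs_vel_samples - v_r                             # (N, 2)
    # Obstacle position relative to the robot over the horizon, per sample.
    traj = rel_p[:, None, :] + times[None, :, None] * rel_v[:, None, :]
    min_dist = np.linalg.norm(traj, axis=2).min(axis=1)       # (N,)
    return float(np.mean(min_dist > radius))

def pick_velocity(p_r, candidates, obs_pos, obs_vel, confidence=0.95):
    """Return the first candidate velocity whose estimated collision-free
    probability meets the required confidence, or None if none does."""
    for v in candidates:
        if collision_free_prob(p_r, v, obs_pos, obs_vel) >= confidence:
            return v
    return None
```

In the paper the chance constraints enter an optimization over velocities rather than a filter over a candidate list; the sampling check above only conveys the probabilistic acceptance criterion.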
Path Planning through Tight Spaces for Payload Transportation using Multiple Mobile Manipulators
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956426
Rahul Tallamraju, V. Sripada, S. Shah
In this paper, we address the problem of path planning through tight spaces for the task of spatial payload transportation using a formation of mobile manipulators. Due to the high-dimensional configuration space of the system, efficient and geometrically stable path planning through tight spaces is challenging. We resolve this by planning the path for the system in two phases. First, an obstacle-free trajectory in $\mathbb{R}^{3}$ for the payload being transported is determined using RRT. Next, near-energy-optimal and quasi-statically stable paths are planned for the formation of robots along this trajectory using non-linear multi-objective optimization. We validate the proposed approach in simulation experiments and compare different multi-objective optimization algorithms to find energy-optimal and geometrically stable robot path plans.
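The first phase, finding an obstacle-free payload trajectory with RRT, can be sketched generically. The following minimal RRT in R^3 uses spherical obstacles and goal biasing purely for illustration; it is not the paper's implementation:

```python
import numpy as np

def rrt(start, goal, obstacles, bounds, step=0.5, goal_tol=0.5,
        max_iters=5000, seed=0):
    """Minimal RRT in R^3: grow a tree from `start`, return a waypoint list
    once a node lands within `goal_tol` of `goal`, else None.
    `obstacles` is a list of (center, radius) spheres; `bounds` = (low, high)."""
    rng = np.random.default_rng(seed)
    nodes = [np.asarray(start, float)]
    parents = [-1]

    def free(p):
        return all(np.linalg.norm(p - c) > r for c, r in obstacles)

    for _ in range(max_iters):
        # Goal-biased sampling: head for the goal 10% of the time.
        sample = goal if rng.random() < 0.1 else rng.uniform(*bounds)
        sample = np.asarray(sample, float)
        # Extend the nearest tree node one step toward the sample.
        i = int(np.argmin([np.linalg.norm(sample - n) for n in nodes]))
        d = sample - nodes[i]
        new = nodes[i] + step * d / max(np.linalg.norm(d), 1e-9)
        if not free(new):
            continue
        nodes.append(new)
        parents.append(i)
        if np.linalg.norm(new - goal) < goal_tol:
            # Walk parent links back to the root to recover the path.
            path, j = [], len(nodes) - 1
            while j != -1:
                path.append(nodes[j])
                j = parents[j]
            return path[::-1]
    return None
```

In the paper's two-phase scheme, the waypoints returned here would then seed the second phase, where the formation's stable, near-energy-optimal configurations are optimized along the trajectory.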
Teaching a Robot how to Spatially Arrange Objects: Representation and Recognition Issues
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956457
Luca Buoncompagni, F. Mastrogiovanni
This paper introduces a technique to teach robots how to represent and qualitatively interpret perceived scenes in tabletop scenarios. To this end, we envisage a three-step human-robot interaction process in which (i) a human shows a scene to a robot, (ii) the robot memorises a symbolic scene representation (in terms of objects and their spatial arrangement), and (iii) the human can revise this representation, if necessary, by further interacting with the robot; here, we focus on steps (i) and (ii). Scene classification occurs at a symbolic level, using ontology-based instance checking and subsumption algorithms. Experiments showcase the main properties of the approach: detecting whether a new scene belongs to a scene class already represented by the robot, or otherwise creating a new representation with a one-shot learning approach, and correlating scenes from a qualitative standpoint to detect similarities and differences in order to build a scene hierarchy.
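A toy version of the symbolic classification step conveys the idea: if a scene is a set of qualitative facts and a scene class is a set of required facts, instance checking reduces to a subset test, and an unrecognised scene can be learned one-shot from a single example. The fact encoding and function names below are hypothetical and far simpler than the paper's ontology-based machinery:

```python
# A scene is a set of qualitative facts, e.g. ("cup", "rightOf", "book").
# A class "subsumes" a scene if all of its required facts hold in it.

def is_instance(scene, scene_class):
    """Instance checking as a subset test (simplified stand-in for
    ontology-based instance checking)."""
    return scene_class <= scene

def classify_or_learn(scene, classes):
    """Return the names of known classes the scene belongs to; if none
    match, learn a new class from this single example (one-shot)."""
    matches = [name for name, cls in classes.items()
               if is_instance(scene, cls)]
    if not matches:
        name = f"Scene{len(classes) + 1}"
        classes[name] = frozenset(scene)  # the example itself becomes the class
        return [name]
    return matches
```

A scene hierarchy, as in the paper, would additionally order the classes themselves by the same subset relation (class A is more general than class B when A's facts are a subset of B's).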
Probabilistic Obstacle Avoidance and Object Following: An Overlap of Gaussians Approach
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956314
Dhaivat Bhatt, Akash Garg, Bharath Gopalakrishnan, K. Krishna
Autonomous navigation and obstacle avoidance are core capabilities that enable robots to execute tasks in the real world. We propose a new approach to collision avoidance that accounts for uncertainty in the states of the agent and the obstacles. We first demonstrate that measures of entropy, used in current approaches for uncertainty-aware obstacle avoidance, are an inappropriate design choice. We then propose an algorithm that solves for an optimal control sequence with a guaranteed risk bound, using a measure of overlap between the two distributions that represent the state of the robot and the obstacle, respectively. Furthermore, we provide closed-form expressions that characterize the overlap as a function of the control input. The proposed approach enables a model-predictive control framework to generate bounded-confidence control commands. An extensive set of simulations has been conducted in various constrained environments to demonstrate the efficacy of the proposed approach over the prior art. We demonstrate the usefulness of the proposed scheme in tight spaces, where computing risk-sensitive control maneuvers is vital. We also show how this framework generalizes to other problems, such as object following.
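The paper's particular closed-form overlap expression is not reproduced here, but one standard closed-form overlap measure between two Gaussians is the Bhattacharyya coefficient, shown below to illustrate the kind of quantity such a risk-bounded controller can constrain (this is an assumed illustrative measure, not necessarily the one used in the paper):

```python
import numpy as np

def bhattacharyya_overlap(mu1, S1, mu2, S2):
    """Bhattacharyya coefficient between N(mu1, S1) and N(mu2, S2):
    1.0 for identical Gaussians, tending to 0 as the overlap vanishes."""
    S = 0.5 * (S1 + S2)
    diff = mu1 - mu2
    dist = (0.125 * diff @ np.linalg.solve(S, diff)          # mean term
            + 0.5 * np.log(np.linalg.det(S)                  # covariance term
                           / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2))))
    return float(np.exp(-dist))
```

In a model-predictive loop, one would evaluate such an overlap between the predicted robot-state and obstacle-state distributions at each step of the horizon and constrain it below a risk threshold.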
Improving Robot Transparency: An Investigation With Mobile Augmented Reality
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956390
Alexandros Rotsidis, Andreas Theodorou, J. Bryson, Robert H. Wortham
Autonomous robots can be difficult to understand even for their developers, let alone end users. Yet, as robots become increasingly integral parts of our societies, the need for affordable, easy-to-use tools that provide transparency grows. The rise of the smartphone and improvements in mobile computing performance have gradually made Augmented Reality (AR) more mobile and affordable. In this paper we review relevant robot system architectures and propose a new software tool that provides robot transparency through AR technology. Our new tool, ABOD3-AR, provides real-time graphical visualisation and debugging of a robot’s goals and priorities, as a means for both designers and end users to gain a better mental model of the internal state and decision-making processes taking place within the robot. We also report on our ongoing research programme and planned studies to further understand the effects of transparency on naive users and experts.
Model Checking Human-Agent Collectives for Responsible AI
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956429
Dhaminda B. Abeywickrama, C. Cîrstea, S. Ramchurn
Humans and agents often need to work together and agree on collective decisions. Ensuring that autonomous systems behave responsibly is complex, especially when they encounter dilemmas. This paper proposes a novel, systematic model-checking approach to responsible decision making by a human-agent collective, to ensure that it is safe, controllable, and ethical. Our approach, which is based on the MCMAS model checker, verifies the permissibility of an agent’s actions by checking the decision-making behaviour against logical formulae specified for safety, controllability, and ethical behaviour. The verification results, through counterexamples and simulations, can provide a judgement, and an explanation to the AI engineer of the reasons actions are refused or allowed.
Influencing Hand-washing Behaviour With a Social Robot: HRI Study With School Children in Rural India
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956367
A. Deshmukh, Sooraj K. Babu, R. Unnikrishnan, S. Ramesh, P. Anitha, R. R. Bhavani
The work presented in this paper reports the influence of a social robot on the hand-washing behaviour of school children in rural India, in a region with a significant presence of indigenous tribes. We describe the design choices made for our social robot to cater to the requirements of the intervention. The custom-built, wall-mounted social robot encouraged 100 children to wash their hands at appropriate times (before meals and after using the toilet) using the correct hand-washing technique, shown on a poster on the wall. The results indicate that the intervention using the robot was effective (a 40% rise) at increasing levels of hand washing with soap, and led to a better hand-washing technique, in ecologically valid settings.
Generation of Expressive Motions for a Tabletop Robot Interpolating from Hand-made Animations
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956246
Gonzalo Mier, F. Caballero, Keisuke Nakamura, L. Merino, R. Gomez
Motion is an important modality for human-robot interaction. Besides being fundamental to carrying out tasks, motion also allows a robot to convey intentions and expressions. In this paper, we focus on a tabletop robot in which motion, among other modalities, is used to convey expressions. The robot incorporates a set of pre-programmed motion animations that show different expressions at various intensities; these were created by designers with expertise in animation. The objective of this paper is to analyze whether these examples can be used as demonstrations and combined by the robot to generate additional, richer expressions. The challenges are the representation space used and the scarce number of examples. The paper compares three different learning-from-demonstration approaches for the task at hand. A user study evaluates the new expressive motions automatically generated by combining previous demonstrations.
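The simplest baseline for combining hand-made demonstrations is direct interpolation between time-aligned joint-angle animations. The sketch below (hypothetical function names, frames-by-joints arrays) shows that baseline; the paper compares richer learning-from-demonstration methods against ideas of this kind:

```python
import numpy as np

def blend_animations(anim_a, anim_b, w):
    """Linearly interpolate two time-aligned animations (frames x joints)
    to obtain an expression at an intermediate intensity: w=0 gives
    anim_a, w=1 gives anim_b."""
    anim_a, anim_b = np.asarray(anim_a, float), np.asarray(anim_b, float)
    assert anim_a.shape == anim_b.shape, "animations must be time-aligned"
    return (1.0 - w) * anim_a + w * anim_b

def resample(anim, n_frames):
    """Resample an animation to n_frames by per-joint linear interpolation,
    so demonstrations of different lengths can be time-aligned first."""
    anim = np.asarray(anim, float)
    t_src = np.linspace(0.0, 1.0, len(anim))
    t_dst = np.linspace(0.0, 1.0, n_frames)
    return np.column_stack([np.interp(t_dst, t_src, anim[:, j])
                            for j in range(anim.shape[1])])
```

Blending in raw joint space is exactly where the representation-space challenge noted in the abstract bites: naive interpolation can produce motions that no longer read as the intended expression, which motivates learned models over this baseline.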