Title: The Impact of Social Robots in Education: Moral Considerations of Dutch Educational Policymakers
Authors: Matthijs H. J. Smakman, J. Berket, E. Konijn
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223582
Venue: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Abstract: Social robots are increasingly studied and applied in the educational domain. Although they hold great potential for education, they also bring new moral challenges. In this study, we explored the moral considerations related to social robots from the perspective of Dutch educational policymakers, first identifying opportunities and concerns and then mapping them onto (moral) values from the literature. To explore these considerations, we conducted focus group sessions with Dutch educational policymakers (N = 20). Considerations ranged from the potential to lower the workload of teachers to concerns about the increased influence of commercial enterprises on the educational system. In total, the policymakers' considerations related to 15 theoretical values. Identifying the moral considerations of educational policymakers provides a better understanding of the governmental attitude towards the use of social robots, and helps to create the moral guidelines needed for a responsible implementation of social robots in education.
Title: Physiological Data-Based Evaluation of a Social Robot Navigation System
Authors: Hasan Kivrak, Pinar Uluer, Hatice Kose, E. Gümüslü, D. Erol, Furkan Çakmak, S. Yavuz
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223539
Venue: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Abstract: The aim of this work is to create a social navigation system for an affective robot that acts as an assistant in the audiology department of hospitals for children with hearing impairments. Unlike traditional navigation systems, this system differentiates between objects and human beings and optimizes several parameters to maintain a social distance during motion, so that the robot does not intrude on people's personal zones. For this purpose, social robot motion planning algorithms are employed to generate human-friendly paths that maintain humans' safety and comfort during the robot's navigation. This paper evaluates the system against traditional navigation, based on surveys and physiological data from adult participants in a preliminary study conducted before using the system with children. Although the self-report questionnaires show no significant difference between the robot's navigation profiles, the physiological data suggest that participants felt more comfortable and less threatened in the social navigation condition.
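The abstract does not give the cost formulation behind its "social distance" parameters; a common way to encode personal zones in social motion planning is an asymmetric Gaussian cost centered on each detected person (all parameter values below are illustrative assumptions, not the authors' implementation):

```python
import math

def social_cost(px, py, hx, hy, h_theta,
                sigma_front=2.0, sigma_side=1.0, sigma_back=1.0):
    """Asymmetric Gaussian social cost of robot position (px, py)
    around a human at (hx, hy) facing h_theta (radians). Cost peaks
    at the person and decays with distance; the frontal lobe is
    wider than the rear one, so paths prefer passing behind."""
    dx, dy = px - hx, py - hy
    # Rotate the offset into the human's body frame.
    fx = math.cos(h_theta) * dx + math.sin(h_theta) * dy
    fy = -math.sin(h_theta) * dx + math.cos(h_theta) * dy
    sigma_x = sigma_front if fx >= 0 else sigma_back
    return math.exp(-(fx**2 / (2 * sigma_x**2) + fy**2 / (2 * sigma_side**2)))
```

A planner adds this term to its usual obstacle and path-length costs, which applies only to humans, so that paths bend around people rather than merely avoiding collision with them as with static objects.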
Title: Understanding Human-Robot Collaboration for People with Mobility Impairments at the Workplace, a Thematic Analysis
Authors: S. A. Arboleda, Max Pascher, Younes Lakhnati, J. Gerken
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223489
Venue: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Abstract: Assistive technologies such as human-robot collaboration have the potential to ease the lives of people with physical mobility impairments in social and economic activities. Currently, this group has lower rates of economic participation due to the lack of environments adapted to their capabilities. We take a closer look at the needs and preferences of people with physical mobility impairments in a human-robot cooperative environment at the workplace. Specifically, we aim to design how people with physical mobility impairments control a robotic arm in manufacturing tasks. We present a case study of a sheltered workshop as a prototype for an institution that employs people with disabilities in manufacturing jobs. Here, we collected data from potential end-users with physical mobility impairments, social workers, and supervisors using a participatory design technique (Future-Workshop). The stakeholders were divided into two groups, primary users (end-users) and secondary users (social workers, supervisors), run in two separate sessions. The gathered information was analyzed using thematic analysis to reveal underlying themes across stakeholders. We identified concepts that highlight underlying concerns related to the robot fitting into the social and organizational structure, human-robot synergy, and human-robot problem management. In this paper, we present our findings and discuss the implications of each theme for shaping an inclusive human-robot cooperative workstation for people with physical mobility impairments.
Title: When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interactions
Authors: Wenxuan Mou, Martina Ruocco, Debora Zanatto, A. Cangelosi
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223551
Venue: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Abstract: Trust is a critical issue in human-robot interaction (HRI), as it is at the core of humans' willingness to accept and use a non-human agent. Theory of Mind (ToM) has been defined as the ability to understand the beliefs and intentions of others that may differ from one's own. Evidence in psychology and HRI suggests that trust and ToM are interconnected and interdependent concepts, as the decision to trust another agent must depend on our own representation of that entity's actions, beliefs, and intentions. However, very few works take the robot's ToM into consideration when studying trust in HRI. In this paper, we investigated whether exposure to a robot's ToM abilities affects humans' trust towards it. To this end, participants played a Price Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. Specifically, participants were asked to accept the robot's price evaluations of common objects. Their willingness to change their own price judgement (i.e., to accept the price the robot suggested) was used as the main measure of trust towards the robot. Our experimental results showed that robots presented with high-level ToM abilities were trusted more than robots presented with low-level ToM skills.
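The behavioural trust measure described here — how often a participant abandons their own price judgement in favour of the robot's — can be sketched as a simple score (the function name and exact scoring rule are illustrative, not the authors' code):

```python
def trust_score(own_prices, robot_prices, final_prices, tol=1e-9):
    """Fraction of items on which the participant adopted the robot's
    suggested price, counted only over items where the robot's price
    actually differed from the participant's own initial judgement."""
    adopted = sum(
        1 for own, robot, final in zip(own_prices, robot_prices, final_prices)
        if abs(final - robot) < tol and abs(robot - own) >= tol
    )
    changeable = sum(
        1 for own, robot in zip(own_prices, robot_prices)
        if abs(robot - own) >= tol
    )
    return adopted / changeable if changeable else 0.0
```

For example, if a participant initially priced three objects at 10, 20, and 30, the robot suggested 12, 20, and 25, and the participant's final answers were 12, 20, and 30, the score is 0.5: of the two items where the robot disagreed, the participant adopted the robot's price once.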
Title: PredGaze: A Incongruity Prediction Model for User’s Gaze Movement
Authors: Y. Otsuka, Shohei Akita, Kohei Okuoka, Mitsuhiko Kimoto, M. Imai
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223525
Venue: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Abstract: With digital signage and communication robots, digital agents have gradually become popular and will become more so. It is important that humans can notice agents' intentions throughout an interaction. This paper focuses on the gaze behavior of an agent and on the phenomenon that, when an agent's gaze behavior differs from human expectations, humans perceive an incongruity and instinctively sense an intention behind the behavioral change. We propose PredGaze, a model that estimates the incongruity humans experience when an agent's gaze behavior shifts away from their expectations. In particular, PredGaze uses the variance in the agent behavior model to express how well humans have learned the agent's behavioral tendency. We expect this variance to improve the estimation of incongruity. PredGaze uses three variables to estimate how strongly a human senses the agent's intention: error, confidence, and incongruity. To evaluate the effectiveness of PredGaze with these three variables, we conducted an experiment investigating the effect of the timing of a change in gaze behavior on perceived incongruity. The results indicated significant differences in subjective scores for the naturalness of agents and for incongruity with agents, depending on the timing of the agent's change in gaze behavior.
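The abstract names the model's three variables (error, confidence, incongruity) and the role of behavioral variance but not their formulas. One minimal, purely illustrative reading — not the published model — is that confidence grows as the observed behavior becomes predictable (low variance), and incongruity weights the prediction error by that confidence:

```python
import statistics

class IncongruityEstimator:
    """Illustrative sketch of an error/confidence/incongruity update
    in the spirit of PredGaze. Variable names and formulas are
    assumptions, not the authors' published equations."""

    def __init__(self):
        self.history = []  # observed gaze directions (radians)

    def update(self, observed, expected):
        self.history.append(observed)
        # Error: how far the observed gaze deviates from expectation.
        error = abs(observed - expected)
        # Confidence: high when past behavior has low variance,
        # i.e., the observer has learned a stable tendency.
        if len(self.history) >= 2:
            var = statistics.pvariance(self.history)
            confidence = 1.0 / (1.0 + var)
        else:
            confidence = 0.0
        # Incongruity: a surprising change matters more when the
        # observer was confident about the agent's tendency.
        incongruity = confidence * error
        return error, confidence, incongruity
```

Under this sketch, the same gaze deviation produces a larger incongruity late in an interaction (after a stable tendency has been learned) than early on, which is consistent with the timing effect the experiment investigates.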
Title: On the Expressivity of a Parametric Humanoid Emotion Model
Authors: Pooja Prajod, K. Hindriks
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223459
Venue: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Abstract: Emotion expression is an important part of human-robot interaction. Previous studies typically focused on a small set of emotions and a single channel to express them. We developed an emotion expression model that parametrically modulates motion, pose, and LED features using valence and arousal values. The model does not interrupt the task or gesture being performed and can therefore be combined with functional behavioural expressions. Even though our model is relatively simple, it is just as capable of expressing emotions as more complicated models proposed in the literature. We systematically explored the expressivity of our model and found that a parametric model using five key motion and pose features can effectively express emotions in the two quadrants where valence and arousal have the same sign. As paradigmatic examples, we tested happy, excited, sad, and tired. By adding a second channel (eye LEDs), the model can also express high-arousal (anger) and low-arousal (relaxed) emotions in the two other quadrants. Our work supports other findings that it remains hard to express moderate-arousal emotions in these quadrants, for both negative (fear) and positive (content) valence.
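The core idea — continuous valence/arousal values parametrically modulating motion, pose, and LED channels rather than switching between discrete emotion presets — can be sketched as follows. The five feature names and the linear mappings are invented for illustration; the paper's actual parameterisation may differ:

```python
def modulate(valence, arousal):
    """Map valence and arousal (each in [-1, 1]) to expression
    parameters. Arousal drives energy-related features, valence
    drives posture and LED colour — an assumed, simplified split."""
    v = max(-1.0, min(1.0, valence))
    a = max(-1.0, min(1.0, arousal))
    return {
        "motion_speed":      0.5 + 0.5 * a,   # faster when aroused
        "gesture_amplitude": 0.5 + 0.5 * a,   # larger gestures when aroused
        "head_pitch":        15.0 * v,        # head up for positive valence
        "posture_openness":  0.5 + 0.5 * v,   # open posture when positive
        "led_hue":           120.0 if v >= 0 else 0.0,  # green vs. red eyes
    }
```

Because the output is a set of multipliers and offsets applied to whatever gesture is currently running, the modulation overlays the ongoing task instead of interrupting it, which is the property the abstract highlights.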
Title: Virtual Reality based Telerobotics Framework with Depth Cameras
Authors: Bukeikhan Omarali, Brice D. Denoun, K. Althoefer, L. Jamone, Maurizio Valle, I. Farkhatdinov
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223445
Venue: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Abstract: This work describes a virtual reality (VR) based robot teleoperation framework that relies on scene visualization from depth cameras and implements human-robot and human-scene interaction gestures. We suggest that mounting a camera on the slave robot's end-effector (an in-hand camera) allows the operator to achieve better visualization of the remote scene and improves task performance. We experimentally compared the operator's ability to understand the remote environment in four visualization modes: a single external static camera, an in-hand camera, in-hand plus external static cameras, and an in-hand camera with OctoMap occupancy mapping. The last option provided the operator with the best understanding of the remote environment while requiring relatively little communication bandwidth. Consequently, we propose grasping methods compatible with VR-based teleoperation using the in-hand camera. Video demonstration: https://youtu.be/3vZaEykMS_E.
Title: A Two-Layered Approach to Adaptive Dialogues for Robotic Assistance
Authors: Riccardo De Benedictis, A. Umbrico, Francesca Fracasso, Gabriella Cortellessa, Andrea Orlandini, A. Cesta
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223605
Venue: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Abstract: Socially assistive robots should provide users with personalized assistance across a wide range of scenarios, such as hospitals, social settings, and private homes. Different people may have different needs, both at the level of cognitive/physical support and in their interaction preferences. Consequently, the type of tasks and the way assistance is delivered can change according to the person with whom the robot is interacting. The authors' long-term research goal is the realization of an advanced cognitive system able to support multiple assistive scenarios and adapt over time. Here we show how the integration of model-based and model-free AI technologies can contextualize robot assistive behaviors and dynamically decide what to do (the assistive plan) and how to do it (the assistive plan execution), according to the features and needs of the assisted person. Although the approach is general, the paper focuses specifically on the synthesis of personalized therapies for (cognitive) stimulation of users.
Title: Human-Robot Artistic Co-Creation: a Study in Improvised Robot Dance
Authors: Oscar Thörn, Peter Knudsen, A. Saffiotti
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223446
Venue: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Abstract: Joint artistic performance, like music, dance, or acting, provides an excellent domain for observing the mechanisms of human-human collaboration. In this paper, we use this domain to study human-robot collaboration and co-creation. We propose a general model in which an AI system mediates the interaction between a human performer and a robotic performer. We then instantiate this model in a case study, implemented using fuzzy logic techniques, in which a human pianist performs jazz improvisations and a robot dancer performs classical dancing patterns in harmony with the artistic moods expressed by the human. The resulting system has been evaluated in an extensive user study and successfully demonstrated in public live performances.
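The abstract says the mediating AI system is implemented with fuzzy logic but gives no rule base. A toy sketch of the idea — fuzzify a musical feature, fire rules, and defuzzify into a dance parameter — might look like this (the membership ranges, rule base, and single tempo input are invented for illustration; the actual system presumably uses richer musical features):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dance_energy(tempo_bpm):
    """Fuzzy mediator from the pianist's tempo to a dance-energy
    setting in [0, 1], using weighted-average defuzzification."""
    slow = tri(tempo_bpm, 40, 70, 100)   # degree of "tempo is slow"
    fast = tri(tempo_bpm, 90, 140, 200)  # degree of "tempo is fast"
    # Rules: IF tempo slow THEN energy low (0.2);
    #        IF tempo fast THEN energy high (0.9).
    total = slow + fast
    if total == 0:
        return 0.5  # no rule fires: stay at neutral energy
    return (slow * 0.2 + fast * 0.9) / total
```

Because the rule outputs blend continuously as memberships overlap, the dancer's energy tracks gradual shifts in the pianist's mood instead of switching abruptly between modes — a common reason to choose fuzzy logic for this kind of mediation.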
Title: WalkingBot: Modular Interactive Legged Robot with Automated Structure Sensing and Motion Planning
Authors: Meng Wang, Yao Su, Hangxin Liu, Ying-Qing Xu
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223474
Venue: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Abstract: This paper presents WalkingBot, a modular robot system that allows non-expert users to build a multi-legged robot in various morphologies using a set of building blocks with embedded sensors and actuators. Through an integrated hardware and software design, the kinematic model of the built robot is interpreted automatically and presented in a customized GUI, so that users can understand, control, and program the robot easily. A Model Predictive Control (MPC) scheme generates control policies for various motions (e.g., moving forward, turning left) corresponding to the sensed robot structure, affording rich robot motions right after assembly. Targeting different levels of programming skill, two programming methods, visual block programming and event programming, are also presented to enable users to create their own interactive legged robot.
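The MPC scheme itself is not detailed in the abstract. The receding-horizon idea it relies on — predict the outcome of candidate action sequences over a short horizon, pick the best sequence, apply only its first action, then re-plan — can be shown on a deliberately trivial 1-D system (this is a generic MPC illustration, not the paper's legged-gait controller):

```python
from itertools import product

def mpc_step(state, target, actions=(-1.0, 0.0, 1.0), horizon=3):
    """One receding-horizon step: enumerate all action sequences of
    the given horizon, score each predicted trajectory by squared
    distance to the target, and return only the first action of the
    best sequence. The dynamics (position += action) are a stand-in
    for a real gait model derived from the sensed structure."""
    def cost(seq):
        s, c = state, 0.0
        for a in seq:
            s += a                      # trivial dynamics model
            c += (s - target) ** 2      # stage cost
        return c
    best = min(product(actions, repeat=horizon), key=cost)
    return best[0]
```

In a system like WalkingBot, the same loop would run with a dynamics model instantiated from the automatically sensed morphology, which is what lets one controller serve many user-built robot shapes.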