Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223563
Hala Khodr, Soheil Kianzad, W. Johal, Aditi Kothiyal, Barbara Bruno, P. Dillenbourg
Collaborative learning emerges from the joint intellectual effort of individuals to understand an object of knowledge collectively. In their search for understanding problems, meanings, and solutions, learners employ different multi-modal strategies. In this work, we explore the role of force feedback in learners’ interaction with tangible hand-held robots. We designed a collaborative learning environment that provides embodied intuitions about linear mathematical functions, combined with graphical representations, and ran a first study involving 24 participants. Our analysis shows a positive learning gain for our learning activity. Moreover, to explore the link between different types of force feedback and learners’ collaboration, we conducted a focus group study with 12 participants. Our results suggest that the haptic communication channel affects the collaboration dynamic differently according to the nature of the learning task. We conclude by proposing design insights for future exploration of haptics in collaborative learning.
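The kind of force feedback described here can be illustrated with a common haptic-rendering pattern (a sketch under assumed parameters, not the authors’ implementation): a virtual spring that pulls the cursor of a hand-held device toward the graph of a linear function y = a·x + b, giving an embodied feel for slope and intercept. The function name and gain `k` are illustrative assumptions.

```python
import math

def guidance_force(x, y, a, b, k=1.0):
    """Hypothetical sketch: spring force pulling a haptic cursor at (x, y)
    toward the line y = a*x + b. The force is proportional to the
    perpendicular distance from the line, directed back onto it."""
    norm = math.hypot(a, 1.0)
    # signed perpendicular distance from (x, y) to the line
    dist = (y - (a * x + b)) / norm
    # unit normal to the line (pointing into the +y half-plane)
    nx, ny = -a / norm, 1.0 / norm
    # spring force directed back toward the line
    return (-k * dist * nx, -k * dist * ny)
```

A cursor resting on the line feels no force, while one displaced off the line is pulled back perpendicular to it; varying `a` changes the direction of that pull, which is the sort of embodied cue the activity relies on.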
Title: AlloHaptic: Robot-Mediated Haptic Collaboration for Learning Linear Functions
Venue: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223429
Rui Li, Yingbai Hu, Yanjun Cao, Mengyao Li
A series of previous works has found that the environmental constraint (EC), the natural result of contact between the robot and the objects it interacts with, is immensely helpful for realizing high-precision robotic tasks. However, due to multifarious errors, such as mechanical, modeling, and sensing errors, there can be discrepancies between the actual constraints and the ideal models. In such cases, it is hard to realize manipulation with EC-based strategies. Inspired by humans, a preliminary framework that integrates coarse sensing information with environmental constraints is proposed for robotic manipulation. By mapping the sensing information into a new space, in which the environmental constraint can be formally described, a region that integrates the sensing information and the environmental constraint is constructed, and the conditions to achieve high-precision manipulation are derived. Based on these conditions, a motion planning strategy is proposed to achieve the required task. The effectiveness of this strategy is verified by case studies.
Title: A Framework for the Integration of Coarse Sensing Information and Environmental Constraints
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223532
Yixiao Wang, François Guimbretière, K. Green
Novel "space-making" robots have the potential to redefine physical space and the human activities occurring in it. Categorically distinct from many robots and far removed from humanoids, space-making robots are not objects in space, not anthropomorphic, not animal-like, and not mobile; instead, they are integral to the physical environment, embedded in or forming walls, ceilings, floors, partitions, vehicle interiors, and building envelopes. Given their distinctiveness, space-making robots offer a novel human-machine interaction. This paper investigates whether users perceive space-making robots as agents: artificial social actors characterized by the capacity for intelligence, recognition, and intention. Results of an in-lab experiment with 11 participants and an online, between-group experiment with 120 participants show that people attribute agency metrics of intelligence, intention, recognition, cooperation, collaboration, friendliness, and welcome to our reconfigurable robotic surface embedded in a wall partition. Beyond the prospect of space-making robots becoming numerous in the built environment, our results carry broader implications for conceptualizing and designing human-machine interactions.
Title: Are Space-making Robots, Agents? Investigations on User Perception of an Embedded Robotic Surface
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223510
H. Shehu, Will N. Browne, H. Eisenbarth
Emotion recognition has become an increasingly important area of research due to the growing number of CCTV cameras deployed in recent years. Deep network-based methods have made impressive progress on emotion recognition tasks, achieving high performance on many datasets and their related competitions, such as the ImageNet challenge. However, deep networks are vulnerable to adversarial attacks: because of their homogeneous representation of knowledge across all images, a small change to the input image made by an adversary can cause a large drop in the algorithm’s accuracy. We hypothesize that by detecting heterogeneous facial landmarks, using the machine learning library Dlib, we can build robustness to adversarial attacks. The residual neural network (ResNet) model is used as an example of a deep learning model. While the accuracy achieved by ResNet dropped by up to 22%, our proposed approach showed strong resistance to attack, with only a small (< 0.3%) or no decrease in accuracy when the attack was launched on the data. Furthermore, the proposed approach requires considerably less execution time than the ResNet model.
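One plausible way landmark coordinates (such as the 68 points produced by Dlib’s `shape_predictor`) can feed a classifier is to convert them into features that are invariant to where and how large the face appears in the image; a perturbation budget that fools a pixel-level network barely moves such geometry. The feature design below is an illustrative assumption, not the paper’s exact pipeline; the point indices follow the common 68-point annotation (36–41 left eye, 42–47 right eye).

```python
import numpy as np

def landmark_features(points):
    """Sketch: turn 68 (x, y) facial landmarks into a translation- and
    scale-invariant feature vector of pairwise distances, normalized by
    the inter-ocular distance."""
    pts = np.asarray(points, dtype=float)
    left_eye = pts[36:42].mean(axis=0)
    right_eye = pts[42:48].mean(axis=0)
    iod = np.linalg.norm(right_eye - left_eye)   # inter-ocular distance
    centred = pts - pts.mean(axis=0)             # remove translation
    # all pairwise distances between landmarks (upper triangle only)
    dists = np.linalg.norm(centred[:, None] - centred[None, :], axis=-1)
    iu = np.triu_indices(len(pts), k=1)
    return dists[iu] / iod                       # remove scale
```

Shifting or uniformly scaling the whole landmark set leaves the feature vector unchanged, which is the property a downstream emotion classifier would rely on.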
Title: An Adversarial Attacks Resistance-based Approach to Emotion Recognition from Images using Facial Landmarks
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223442
Seiya Mitsuno, Y. Yoshikawa, H. Ishiguro
In recent years, a substantial amount of research has aimed at realizing social robots that can maintain long-term user interest. One approach is a dialogue strategy in which the robot makes remarks based on previous dialogues with users. However, privacy problems may arise when the user’s private information is mentioned. We propose a novel dialogue strategy whereby a robot mentions another robot in the form of gossip. This strategy can improve the sense of conversation, resulting in increased interest while avoiding the privacy issue. We examined our proposal in a conversation experiment evaluated through participants’ impressions. The results demonstrate that the proposed method helps the robot obtain higher evaluations. In particular, perceived mind improved in the Likert-scale evaluation, whereas robot empathy and intention to use improved in the binary comparison evaluation. Our dialogue strategy may contribute to understanding the factors underlying the sense of conversation, thereby adding value to the field of human-robot interaction.
Title: Robot-on-Robot Gossiping to Improve Sense of Human-Robot Conversation
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223501
Adriana Bono, A. Augello, M. Gentile, S. Gaglio
In this work, we present a robotic storytelling system in which the characters are modelled as cognitive agents embodied in Pepper and NAO robots. The characters are designed by exploiting the ACT-R architecture, taking into account the knowledge, behaviours, norms, and expectations typical of social practices, as well as desires resulting from their personality. The characters explain their reasoning processes during the narration through a sort of internal dialogue, which generates a high level of credibility as experienced by the audience.
Title: Social Practices based characters in a Robotic Storytelling System
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223578
Shabnam FakhrHosseini, Chaiwoo Lee, Julie B. Miller, Taylor R. Patskanick, J. Coughlin
During the last decade, attention towards using social robots as a potential technology to improve older adults’ quality of life has increased. Loneliness, caregiving, medication management, and activities of daily living are some of the topics that researchers are trying to address with social robots. Although research has uncovered important factors in the acceptance of social robots, older adults, especially the oldest-old population aged 85 and older, have been underrepresented in these studies. In this study, a panel of adults aged 85 and older was recruited to address this gap by understanding their attitudes towards and experiences with smart technologies and social robots.

The panel engagement included three parts. First, participants completed a questionnaire about technology adoption, trust in technology, and acceptance of social robots. An in-person meeting was then convened, during which participants were given an overview and a demonstration of social robots and smart virtual assistants. Lastly, participants discussed their opinions about technology in general, and social robots specifically, in smaller focus groups assigned based on their level of technology experience.

Results show that older adults’ acceptance of social robots as companions was positively affected by seeing the robots and interacting with them briefly. However, this effect was only observed in the early-adopter and middle-of-the-road adopter groups. The findings are discussed in terms of the role of important variables in older adults’ acceptance of social robots as companions.
Title: Older Adults’ Opinion on Social Robot as Companion
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223524
Shujie Zhou, Leimin Tian
With recent advancements in robotics and artificial intelligence, human-robot collaboration has drawn growing interest. In human collaboration, emotion serves both as an evaluation of events and as a communicative cue through which people express and perceive each other’s internal states. We are thus motivated to investigate the influence of robots’ emotional expressions on human-robot collaboration. In particular, we conducted experiments in which a participant interacted with two Cozmo robots in a collaborative game. We found that when the robots exhibited emotional expressions, participants were more likely to collaborate with them and achieved task success in a shorter time. Moreover, participants perceived emotional robots more positively and reported a more enjoyable experience interacting with them. Our study provides insights into the benefits of incorporating artificial emotions in robots for human-robot collaboration and interaction.
Title: Would you help a sad robot? Influence of robots’ emotional expressions on human-multi-robot collaboration
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223470
S. Cooper, Alessandro Di Fava, Carlos Vivas, Luca Marchionni, F. Ferro
With the world population aging and the number of healthcare users with multiple chronic diseases increasing, healthcare is becoming more costly; as such, the need to optimise both hospital and in-home care is of paramount importance. This paper reviews the challenges that older people, people with mobility constraints, hospital patients, and isolated healthcare users face, and how socially assistive robots can be used to help them. Related promising areas and limitations are highlighted. The main focus is placed on PAL Robotics’ newest robot, ARI: a high-performance social robot and companion designed for a wide range of multi-modal expressive gestures, gaze, and personalised behaviour, with great potential to become part of the healthcare community by applying powerful AI algorithms. ARI can be used to help administer first-care attention and to provide emotional support to people who live in isolation, including the elderly and healthcare users who are confined because of infectious diseases such as Covid-19. The paper introduces the ARI robot’s technical features and potential applications.
Title: ARI: the Social Assistive Robot and Companion
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223572
Jing Luo, Chenguang Yang, E. Burdet, Yanan Li
In human-robot collaborative transportation and sawing tasks, the human operator physically interacts with the robot and directs the robot’s movement by applying an interaction force. The robot needs to update its control strategy to adapt to the interaction with the human and to minimize the interaction force. To this end, we propose an integrated algorithm of trajectory adaptation and adaptive impedance control to minimize the interaction force in physical human-robot interaction (pHRI) and to guarantee the performance of the collaboration tasks. We first utilize the information of the interaction force to regulate the robot’s reference trajectory. Then, an adaptive impedance controller is developed to ensure automatic adaptation of the robot’s impedance parameters. While one can reduce the interaction force by using either trajectory adaptation or adaptive impedance control alone, we investigate the task performance when combining both. Experimental results on a planar robotic platform verify the effectiveness of the proposed method.
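The trajectory-adaptation half of this idea can be sketched in a toy 1-D simulation (an illustration under assumed dynamics and gains, not the paper’s controller; the adaptive impedance part is omitted and the impedance gains held fixed): the robot renders a spring-damper about a reference position, and the reference drifts along the measured interaction force, so the robot "gives way" to the human and the force shrinks.

```python
def simulate(adapt_gain, k=50.0, d=10.0, m=1.0, dt=0.01, steps=400):
    """Toy 1-D pHRI simulation (illustrative assumption): a human 'arm
    spring' of stiffness k_h pulls the end-effector toward goal x_h,
    while the robot renders impedance (k, d) about reference x_r.
    With adapt_gain > 0 the reference is updated along the interaction
    force each step; returns the final interaction-force magnitude."""
    x, v, x_r = 0.0, 0.0, 0.0
    x_h, k_h = 1.0, 30.0                # human goal and stiffness (assumed)
    f_h = k_h * (x_h - x)
    for _ in range(steps):
        f_h = k_h * (x_h - x)           # force the human applies
        x_r += adapt_gain * f_h * dt    # trajectory adaptation step
        f_r = k * (x_r - x) - d * v     # robot impedance force
        v += (f_h + f_r) / m * dt       # integrate the dynamics
        x += v * dt
    return abs(f_h)
```

Without adaptation the robot holds its reference and the human must sustain a large force at equilibrium; with a modest adaptation gain the residual force decays toward zero, which is the qualitative effect the paper exploits.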
Title: Adaptive impedance control with trajectory adaptation for minimizing interaction force