Domestic Social Robots as Companions or Assistants? The Effects of the Robot Positioning on the Consumer Purchase Intentions
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900844
Jun San Kim, Dahyun Kang, Jongsuk Choi, Sonya S. Kwak
This study explores the effects of the positioning strategy of domestic social robots on consumers' purchase intention. Specifically, the authors investigate the effects of positioning the robot as a companion, as an assistant, or as an appliance. The results showed that participants preferred domestic social robots positioned as assistants over those positioned as companions. Moreover, male participants also preferred robots positioned as appliances over robots positioned as companions. The results further showed that the effect of positioning on purchase intention was mediated by the participants' perceived usefulness of the robot.
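The mediation claim in the last sentence is typically tested by estimating the indirect effect of positioning on purchase intention through perceived usefulness. The sketch below shows a generic percentile-bootstrap mediation test, not the authors' analysis; the column names (`positioning`, `usefulness`, `purchase`) and the dummy coding of the conditions are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df: pd.DataFrame) -> float:
    """Indirect effect of positioning on purchase intention via perceived usefulness."""
    # a-path: positioning -> perceived usefulness
    a = smf.ols("usefulness ~ positioning", data=df).fit().params["positioning"]
    # b-path: usefulness -> purchase intention, controlling for positioning
    b = smf.ols("purchase ~ usefulness + positioning", data=df).fit().params["usefulness"]
    return a * b

def bootstrap_ci(df: pd.DataFrame, n_boot: int = 5000, seed: int = 0) -> np.ndarray:
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    estimates = [
        indirect_effect(df.sample(frac=1.0, replace=True,
                                  random_state=int(rng.integers(1 << 31))))
        for _ in range(n_boot)
    ]
    return np.percentile(estimates, [2.5, 97.5])
```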
{"title":"Domestic Social Robots as Companions or Assistants? The Effects of the Robot Positioning on the Consumer Purchase Intentions*","authors":"Jun San Kim, Dahyun Kang, Jongsuk Choi, Sonya S. Kwak","doi":"10.1109/RO-MAN53752.2022.9900844","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900844","url":null,"abstract":"This study explores the effects of the positioning strategy of domestic social robots on the purchase intention of consumers. Specifically, the authors investigate the effects of robot positioning as companions with as assistants and as appliances. The study results showed that the participants preferred the domestic social robots positioned as assistants rather than as companions. Moreover, for male participants, the positioning of domestic social robots as appliances was also preferred over robots positioned as companions. The study results also showed that the effects of positioning on the purchase intention were mediated by the participants’ perception of usefulness regarding the robot.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"17 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125273045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Listen and tell me who the user is talking to: Automatic detection of the interlocutor’s type during a conversation
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900632
Youssef Hmamouche, M. Ochs, T. Chaminade, Laurent Prévot
In the well-known Turing test, humans have to judge whether they are writing to another human or to a chatbot. In this article, we propose a reversed Turing test adapted to live conversations: based on the human’s speech, we have developed a model that automatically detects whether he or she is speaking to an artificial agent or to a human. We propose a prediction methodology combining a behaviour-specific feature-extraction step with a deep learning model based on recurrent neural networks. The prediction results show that our approach, and in particular the chosen features, significantly improves predictions compared to the traditional approach in automatic speech recognition, which relies on spectral features such as Mel-frequency cepstral coefficients (MFCCs). Our approach makes it possible to automatically determine the type of conversational partner, human or artificial agent, solely from the speech of the human interlocutor. Most importantly, the model offers a novel and promising way to weigh the importance of the behavioural cues used to correctly recognize the nature of the interlocutor, in other words, to identify which aspects of human behaviour adapt to the nature of the interlocutor.
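As a concrete illustration of the spectral baseline mentioned above, the sketch below extracts frame-level MFCCs and feeds them to a small recurrent classifier. This is a minimal stand-in assuming librosa and tf.keras, not the authors' behaviour-feature model; the 13-coefficient setting, layer sizes, and zero-padding convention are assumptions.

```python
import librosa
import tensorflow as tf

def mfcc_sequence(wav_path: str, sr: int = 16000, n_mfcc: int = 13):
    """Spectral baseline features: frame-level MFCCs with shape (frames, n_mfcc)."""
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def build_interlocutor_classifier(n_features: int = 13) -> tf.keras.Model:
    """Recurrent binary classifier: is the user speaking to a human or an artificial agent?"""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(None, n_features)),  # variable-length sequences
        tf.keras.layers.Masking(mask_value=0.0),          # ignore zero-padded frames
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # P(artificial agent)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```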
{"title":"Listen and tell me who the user is talking to: Automatic detection of the interlocutor’s type during a conversation","authors":"Youssef Hmamouche, M. Ochs, T. Chaminade, Laurent Prévot","doi":"10.1109/RO-MAN53752.2022.9900632","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900632","url":null,"abstract":"In the well-known Turing test, humans have to judge whether they write to another human or a chatbot. In this article, we propose a reversed Turing test adapted to live conversations: based on the speech of the human, we have developed a model that automatically detects whether she/he speaks to an artificial agent or a human. We propose in this work a prediction methodology combining a step of specific features extraction from behaviour and a specific deep learning model based on recurrent neural networks. The prediction results show that our approach, and more particularly the considered features, improves significantly the predictions compared to the traditional approach in the field of automatic speech recognition systems, which is based on spectral features, such as Mel-frequency Cepstral Coefficients (MFCCs). Our approach allows evaluating automatically the type of conversational agent, human or artificial agent, solely based on the speech of the human interlocutor. Most importantly, this model provides a novel and very promising approach to weigh the importance of the behaviour cues used to make correctly recognize the nature of the interlocutor, in other words, what aspects of the human behaviour adapts to the nature of its interlocutor.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130861997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Continuous and Incremental Learning in physical Human-Robot Cooperation using Probabilistic Movement Primitives
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900547
Daniel Schäle, M. Stoelen, E. Kyrkjebø
For a successful deployment of physical Human-Robot Cooperation (pHRC), humans need to be able to teach robots new motor skills quickly. Probabilistic movement primitives (ProMPs) are a promising method to encode a robot’s motor skills learned from human demonstrations in pHRC settings. However, most algorithms that learn ProMPs from human demonstrations operate in batch mode, which is not ideal in pHRC when we want humans and robots to work together from the very first demonstration. In this paper, we propose a new learning algorithm to learn ProMPs incrementally and continuously in pHRC settings. Our algorithm incorporates new demonstrations sequentially as they arrive, allowing humans to observe the robot’s learning progress and incrementally shape the robot’s motor skill. A built-in forgetting factor allows for corrective demonstrations resulting from the human’s learning curve or from changes in task constraints. We compare the performance of our algorithm to existing batch ProMP algorithms on reference data generated from a pick-and-place task in our lab. Furthermore, we demonstrate how the forgetting factor allows us to adapt to changes in the task. The incremental learning algorithm presented in this paper has the potential to lead to more intuitive learning progress and to establish successful cooperation between human and robot faster than training in batch mode.
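A minimal sketch of the core idea, updating a ProMP weight distribution one demonstration at a time with a forgetting factor, is shown below in plain NumPy for a single degree of freedom. The basis functions, the blending rule, and all constants are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

class IncrementalProMP:
    """Sketch: ProMP weight distribution updated demonstration by demonstration
    with an exponential forgetting factor (not the paper's exact update rule)."""

    def __init__(self, n_basis: int = 15, forgetting: float = 0.95, ridge: float = 1e-6):
        self.n_basis = n_basis
        self.lam = forgetting      # weight given to the old estimate
        self.ridge = ridge
        self.mean = None           # running mean of weight vectors
        self.cov = None            # running covariance of weight vectors

    def _phi(self, t: np.ndarray) -> np.ndarray:
        """Normalized Gaussian basis functions evaluated at phases t in [0, 1]."""
        centers = np.linspace(0, 1, self.n_basis)
        width = 1.0 / (self.n_basis ** 2)
        b = np.exp(-0.5 * (t[:, None] - centers[None, :]) ** 2 / width)
        return b / b.sum(axis=1, keepdims=True)

    def add_demo(self, trajectory) -> None:
        """Fit weights to one demonstrated trajectory and blend them into the model."""
        traj = np.asarray(trajectory, dtype=float)
        phi = self._phi(np.linspace(0, 1, len(traj)))
        w = np.linalg.solve(phi.T @ phi + self.ridge * np.eye(self.n_basis), phi.T @ traj)
        if self.mean is None:
            self.mean = w
            self.cov = np.eye(self.n_basis) * 1e-2
        else:
            d = w - self.mean
            self.mean = self.lam * self.mean + (1 - self.lam) * w
            self.cov = self.lam * self.cov + (1 - self.lam) * np.outer(d, d)

    def mean_trajectory(self, n_steps: int = 100) -> np.ndarray:
        """Current mean motion, e.g. for replay on the robot."""
        return self._phi(np.linspace(0, 1, n_steps)) @ self.mean
```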
{"title":"Continuous and Incremental Learning in physical Human-Robot Cooperation using Probabilistic Movement Primitives","authors":"Daniel Schäle, M. Stoelen, E. Kyrkjebø","doi":"10.1109/RO-MAN53752.2022.9900547","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900547","url":null,"abstract":"For a successful deployment of physical Human-Robot Cooperation (pHRC), humans need to be able to teach robots new motor skills quickly. Probabilistic movement primitives (ProMPs) are a promising method to encode a robot’s motor skills learned from human demonstrations in pHRC settings. However, most algorithms to learn ProMPs from human demonstrations operate in batch mode, which is not ideal in pHRC when we want humans and robots to work together from even the first demonstration. In this paper, we propose a new learning algorithm to learn ProMPs incre-mentally and continuously in pHRC settings. Our algorithm incorporates new demonstrations sequentially as they arrive, allowing humans to observe the robot’s learning progress and incrementally shape the robot’s motor skill. A built-in forgetting factor allows for corrective demonstrations resulting from the human’s learning curve or changes in task constraints. We compare the performance of our algorithm to existing batch ProMP algorithms on reference data generated from a pick-and-place task at our lab. Furthermore, we demonstrate how the forgetting factor allows us to adapt to changes in the task. The incremental learning algorithm presented in this paper has the potential to lead to a more intuitive learning progress and to establish a successful cooperation between human and robot faster than training in batch mode.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128850706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The LMA12-O Framework for Emotional Robot Eye Gestures
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900752
Kerl Galindo, Deborah Szapiro, R. Gomez
The eyes play a significant role in how robots are perceived socially by humans because of the eyes’ centrality in human communication. To date, there has been no consistent or reliable system for designing and transferring affective emotional eye gestures to anthropomorphized social robots. Combining research findings from Oculesics, Laban Movement Analysis, and the Twelve Principles of Animation, this paper discusses the design and evaluation of the prototype LMA12-O framework for maximising the emotive communication potential of eye gestures in anthropomorphized social robots. Results of initial user testing showed LMA12-O to be effective for designing affective emotional eye gestures on the test robot, with important considerations for future iterations of the framework.
{"title":"The LMA12-O Framework for Emotional Robot Eye Gestures","authors":"Kerl Galindo, Deborah Szapiro, R. Gomez","doi":"10.1109/RO-MAN53752.2022.9900752","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900752","url":null,"abstract":"The eyes play a significant role in how robots are perceived socially by humans due to the eye’s centrality in human communication. To date there has been no consistent or reliable system for designing and transferring affective emotional eye gestures to anthropomorphized social robots. Combining research findings from Oculesics, Laban Movement Analysis and the Twelve Principles of Animation, this paper discusses the design and evaluation of the prototype LMA12-O framework for the purpose of maximising the emotive communication potential of eye gestures in anthropomorphized social robots. Results of initial user testings evidenced LMA12-O to be effective in designing affective emotional eye gestures in the test robot with important considerations for future iterations of this framework.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"16 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125783437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Sample Efficiency Improved Method via Hierarchical Reinforcement Learning Networks
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900738
Qinghua Chen, Evan Dallas, Pourya Shahverdi, Jessica Korneder, O. Rawashdeh, W. Louie
Learning from demonstration (LfD) approaches have garnered significant interest for teaching social robots a variety of tasks in healthcare, educational, and service domains after they have been deployed. These approaches often require a significant number of demonstrations for a robot to learn a performant model of the task. However, requiring non-experts to provide numerous demonstrations for a social robot to learn a task is impractical in real-world applications. In this paper, we propose a method to improve the sample efficiency of existing learning from demonstration approaches via data augmentation, dynamic experience replay sizes, and hierarchical Deep Q-Networks (DQN). After validation on two different datasets, the results suggest that the proposed hierarchical DQN is effective for improving sample efficiency when learning tasks from demonstration. In the future, such a sample-efficient approach has the potential to improve our ability to apply LfD approaches for social robots in domains where demonstration data is limited, sparse, and imbalanced.
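Of the three ingredients, the "dynamic experience replay sizes" can be illustrated with a few lines of code: a replay buffer that enlarges its capacity instead of discarding old demonstration transitions. This is a hypothetical sketch, not the authors' implementation; the growth rule and limits are assumptions.

```python
import random
from collections import deque

class DynamicReplayBuffer:
    """Replay buffer whose capacity grows as more demonstration data arrives,
    a simple stand-in for a dynamic experience replay size."""

    def __init__(self, initial_capacity: int = 1_000,
                 growth_factor: int = 2, max_capacity: int = 100_000):
        self.capacity = initial_capacity
        self.growth_factor = growth_factor
        self.max_capacity = max_capacity
        self.buffer = deque(maxlen=self.capacity)

    def push(self, transition) -> None:
        # When the buffer is full, enlarge it (up to max_capacity) rather than
        # silently dropping older demonstration transitions.
        if len(self.buffer) == self.capacity and self.capacity < self.max_capacity:
            self.capacity = min(self.capacity * self.growth_factor, self.max_capacity)
            self.buffer = deque(self.buffer, maxlen=self.capacity)
        self.buffer.append(transition)

    def sample(self, batch_size: int):
        """Uniformly sample a training batch of (s, a, r, s', done) tuples."""
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```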
{"title":"A Sample Efficiency Improved Method via Hierarchical Reinforcement Learning Networks","authors":"Qinghua Chen, Evan Dallas, Pourya Shahverdi, Jessica Korneder, O. Rawashdeh, W. Louie","doi":"10.1109/RO-MAN53752.2022.9900738","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900738","url":null,"abstract":"Learning from demonstration (LfD) approaches have garnered significant interest for teaching social robots a variety of tasks in healthcare, educational, and service domains after they have been deployed. These LfD approaches often require a significant number of demonstrations for a robot to learn a performant model from task demonstrations. However, requiring non-experts to provide numerous demonstrations for a social robot to learn a task is impractical in real-world applications. In this paper, we propose a method to improve the sample efficiency of existing learning from demonstration approaches via data augmentation, dynamic experience replay sizes, and hierarchical Deep Q-Networks (DQN). After validating our methods on two different datasets, results suggest that our proposed hierarchical DQN is effective for improving sample efficiency when learning tasks from demonstration. In the future, such a sample-efficient approach has the potential to improve our ability to apply LfD approaches for social robots to learn tasks in domains where demonstration data is limited, sparse, and imbalanced.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127090460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nothing About Us Without Us: a participatory design for an Inclusive Signing Tiago Robot
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900538
Emanuele Antonioni, Cristiana Sanalitro, O. Capirci, Alessio Di Renzo, Maria Beatrice D'Aversa, D. Bloisi, Lun Wang, Ermanno Bartoli, Lorenzo Diaco, V. Presutti, D. Nardi
The success of the interaction between the robotics community and the users of its services is of considerable importance when drafting the development plan of any technology. This becomes even more relevant when dealing with sensitive services and issues, such as those related to interaction with specific subgroups of a population. Over the years, there have been few successes in integrating and proposing technologies related to deafness and sign language. In this paper, by contrast, we report a successful interaction between a signing robot and the Italian deaf community, which took place during the Smart City Robotics Challenge (SciRoc) 2021 competition. Thanks to the use of participatory design and the involvement of experts from the deaf community from the early stages of the project, it was possible to create a technology that achieved significant results in terms of acceptance by the community itself and that could lead to significant results in technology development as well.
{"title":"Nothing About Us Without Us: a participatory design for an Inclusive Signing Tiago Robot","authors":"Emanuele Antonioni, Cristiana Sanalitro, O. Capirci, Alessio Di Renzo, Maria Beatrice D'Aversa, D. Bloisi, Lun Wang, Ermanno Bartoli, Lorenzo Diaco, V. Presutti, D. Nardi","doi":"10.1109/RO-MAN53752.2022.9900538","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900538","url":null,"abstract":"The success of the interaction between the robotics community and the users of these services is an aspect of considerable importance in the drafting of the development plan of any technology. This aspect becomes even more relevant when dealing with sensitive services and issues such as those related to interaction with specific subgroups of any population. Over the years, there have been few successes in integrating and proposing technologies related to deafness and sign language. Instead, in this paper, we propose an account of successful interaction between a signatory robot and the Italian deaf community, which occurred during the Smart City Robotics Challenge (SciRoc) 2021 competition1. Thanks to the use of a participatory design and the involvement of experts belonging to the deaf community from the early stages of the project, it was possible to create a technology that has achieved significant results in terms of acceptance by the community itself and could lead to significant results in the technology development as well.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114201449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motivational Gestures in Robot-Assisted Language Learning: A Study of Cognitive Engagement using EEG Brain Activity
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900508
M. Alimardani, Jishnu Harinandansingh, Lindsey Ravin, M. Haas
Social robots have been shown to be effective in pedagogical settings due to their embodiment and social behavior, which can improve a learner’s motivation and engagement. In this study, the impact of a social robot’s motivational gestures in robot-assisted language learning (RALL) was investigated. Twenty-five university students participated in a language learning task tutored by a NAO robot under two conditions (within-subjects design): in one condition, the robot provided positive and negative feedback on the participant’s performance using both verbal and non-verbal behavior (Gesture condition); in the other, the robot employed only verbal feedback (No-Gesture condition). To assess cognitive engagement and learning in each condition, we collected EEG brain activity from the participants during the interaction and evaluated their word knowledge in an immediate and a delayed post-test. No significant difference was found in cognitive engagement as quantified by the EEG Engagement Index during the practice phase. Similarly, the word test results indicated overall high performance in both conditions, suggesting similar learning gains regardless of the robot’s gestures. These findings do not provide evidence in favor of a robot’s motivational gestures during language learning tasks, and they also indicate challenges in designing effective social behavior for pedagogical robots.
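The EEG Engagement Index referenced above is commonly computed as the ratio of band powers beta / (alpha + theta) (Pope et al., 1995). Assuming that standard definition, a single-channel computation might look like the sketch below; the band limits, sampling rate, and Welch parameters are assumptions.

```python
import numpy as np
from scipy.signal import welch

def engagement_index(eeg: np.ndarray, fs: int = 256) -> float:
    """EEG Engagement Index beta / (alpha + theta) for one channel.
    eeg: 1-D signal (at least a few seconds long), fs: sampling rate in Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # 2-second Welch segments

    def band_power(lo: float, hi: float) -> float:
        mask = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[mask], freqs[mask])

    theta = band_power(4, 8)
    alpha = band_power(8, 13)
    beta = band_power(13, 30)
    return beta / (alpha + theta)
```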
{"title":"Motivational Gestures in Robot-Assisted Language Learning: A Study of Cognitive Engagement using EEG Brain Activity","authors":"M. Alimardani, Jishnu Harinandansingh, Lindsey Ravin, M. Haas","doi":"10.1109/RO-MAN53752.2022.9900508","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900508","url":null,"abstract":"Social robots have been shown effective in pedagogical settings due to their embodiment and social behavior that can improve a learner’s motivation and engagement. In this study, the impact of a social robot’s motivational gestures in robot-assisted language learning (RALL) was investigated. Twenty-five university students participated in a language learning task tutored by a NAO robot under two conditions (within-subjects design); in one condition the robot provided positive and negative feedback on participant’s performance using both verbal and non-verbal behavior (Gesture condition), in another condition the robot only employed verbal feedback (No-Gesture condition). To assess cognitive engagement and learning in each condition, we collected EEG brain activity from the participants during the interaction and evaluated their word knowledge during an immediate and delayed post-test. No significant difference was found with respect to cognitive engagement as quantified by the EEG Engagement Index during the practice phase. Similarly, the word test results indicated an overall high performance in both conditions, suggesting similar learning gain regardless of the robot’s gestures. These findings do not provide evidence in favor of robot’s motivational gestures during language learning tasks but at the same time indicate challenges with respect to the design of effective social behavior for pedagogical robots.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114600426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Self Learning System for Emotion Awareness and Adaptation in Humanoid Robots
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900581
Sudhir Shenoy, Yusheng Jiang, Tyler Lynch, Lauren Isabelle Manuel, Afsaneh Doryab
Humanoid robots provide a unique opportunity for personalized interaction using emotion recognition. However, emotion recognition performed by humanoid robots in complex social interactions is limited in the flexibility of the interaction as well as in the personalization and adaptation of the responses. We designed an adaptive learning system for real-time emotion recognition that elicits its own ground-truth data and updates individualized models to improve performance over time. Convolutional neural networks based on off-the-shelf ResNet50 and Inception v3 are assembled into an ensemble model used for real-time emotion recognition from facial expressions. Two sets of robot behaviors, general and personalized, are developed to evoke different emotional responses. The personalized behaviors are adapted based on user preferences collected through a pre-test survey. The performance of the proposed system is verified through a two-stage user study and tested for the accuracy of the self-supervised retraining. We also evaluate the effectiveness of the robot’s personalized behavior in evoking intended emotions between stages using trust, empathy, and engagement scales. The participants were divided into two groups based on their familiarity and previous interactions with the robot. The emotion recognition results indicate a 12% increase in the F1 score for 7 emotions in stage 2 compared to the pre-trained model. Higher mean scores for trust, engagement, and empathy were observed in both participant groups. The average similarity score for both stages was 82%, and the average success rate of eliciting the intended emotion increased by 8.28% between stages despite the groups’ differences in familiarity, thus offering a way to mitigate novelty-effect patterns in user interactions.
JAHRVIS, a Supervision System for Human-Robot Collaboration
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900665
Amandine Mayima, A. Clodic, R. Alami
The supervision component is the binder of a robotic architecture: without it, no task is carried out and no interaction takes place. It conducts the other components of the architecture towards the achievement of a goal, which, in the context of a collaboration with a human, means bringing about changes in the physical environment and updating the human partner’s mental state. However, little work focuses on this component in charge of the robot’s decision-making and control, even though it is the robot’s puppeteer. Most often, either tasks are simply scripted or the supervisor is built for a specific task. We therefore propose JAHRVIS, a Joint Action-based Human-aware supeRVISor. It aims to be task-independent while implementing a set of key joint-action and collaboration mechanisms. With this contribution, accompanied by our open-source code, we intend to move the deployment of autonomous collaborative robots forward.
{"title":"JAHRVIS, a Supervision System for Human-Robot Collaboration","authors":"Amandine Mayima, A. Clodic, R. Alami","doi":"10.1109/RO-MAN53752.2022.9900665","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900665","url":null,"abstract":"The supervision component is the binder of a robotic architecture. Without it, there is no task, no interaction happening, it conducts the other components of the architecture towards the achievement of a goal, which means, in the context of a collaboration with a human, to bring changes in the physical environment and to update the human partner mental state. However, not so much work focus on this component in charge of the robot decision-making and control, whereas this is the robot puppeteer. Most often, either tasks are simply scripted, or the supervisor is built for a specific task. Thus, we propose JAHRVIS, a Joint Action-based Human-aware supeRVISor. It aims at being task-independent while implementing a set of key joint action and collaboration mechanisms. With this contribution, we intend to move the deployment of autonomous collaborative robots forward, accompanying this paper with our open-source code.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126708044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatio-Temporal Action Order Representation for Mobile Manipulation Planning
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900643
Yosuke Kawasaki, Masaki Takahashi
Social robots are used to perform mobile manipulation tasks, such as tidying up and carrying, based on instructions provided by humans. A mobile manipulation planner that exploits the robot’s functions requires a good understanding of which actions are feasible in real space, given the robot’s subsystem configuration and the placement of objects in the environment. This study aims to realize a mobile manipulation planner that considers the world state, which includes the robot state (the subsystem configuration and the state of each subsystem) required to exploit the robot’s functions. This paper proposes a novel environmental representation called a world state-dependent action graph (WDAG). The WDAG represents the spatial and temporal order of feasible actions based on the world state, adopting a knowledge representation built on scene graphs and a recursive multilayered graph structure. The study also proposes a mobile manipulation planning method using the WDAG. The planner can derive many effective action sequences for accomplishing a given task based on an exhaustive understanding of the spatial and temporal connections between actions. The effectiveness of the proposed method is evaluated through practical machine experiments. The results demonstrate that the proposed method facilitates effective utilization of the robot’s functions.
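The abstract does not give the WDAG's data structure, but the idea of a graph whose nodes are feasible actions conditioned on the world state and whose edges encode temporal order can be sketched as a toy with networkx; the node naming, the precondition encoding, and the topological-sort planning step below are illustrative assumptions, not the authors' representation.

```python
import networkx as nx

# Hypothetical sketch: nodes are (world_state, action) pairs and a directed edge
# means "the target action becomes feasible after the source action was executed".
wdag = nx.DiGraph()

def add_feasible_action(state: str, action: str, preconditions=()):
    """Register an action feasible in `state`, ordered after its precondition actions."""
    node = (state, action)
    wdag.add_node(node)
    for pre in preconditions:  # pre is an earlier (world_state, action) node
        wdag.add_edge(pre, node)

# Toy example: the robot must grasp the cup before it can carry it to the table.
add_feasible_action("cup_on_counter", "grasp(cup)")
add_feasible_action("cup_in_gripper", "carry(cup, table)",
                    preconditions=[("cup_on_counter", "grasp(cup)")])

# Any topological order of the graph is a temporally consistent action sequence.
plan = list(nx.topological_sort(wdag))
```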
{"title":"Spatio-Temporal Action Order Representation for Mobile Manipulation Planning*","authors":"Yosuke Kawasaki, Masaki Takahashi","doi":"10.1109/RO-MAN53752.2022.9900643","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900643","url":null,"abstract":"Social robots are used to perform mobile manipulation tasks, such as tidying up and carrying, based on instructions provided by humans. A mobile manipulation planner, which is used to exploit the robot’s functions, requires a better understanding of the feasible actions in real space based on the robot’s subsystem configuration and the object placement in the environment. This study aims to realize a mobile manipulation planner considering the world state, which consists of the robot state (subsystem configuration and their state) required to exploit the robot’s functions. In this paper, this study proposes a novel environmental representation called a world state-dependent action graph (WDAG). The WDAG represents the spatial and temporal order of feasible actions based on the world state by adopting the knowledge representation with scene graphs and a recursive multilayered graph structure. The study also proposes a mobile manipulation planning method using the WDAG. The planner enables the derivation of many effective action sequences to accomplish the given tasks based on an exhaustive understanding of the spatial and temporal connections of actions. The effectiveness of the proposed method is evaluated through practical machine experiments performed. The experimental result demonstrates that the proposed method facilitates the effective utilization of the robot’s functions.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129211046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}