
Latest publications from the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

Let me join you! Real-time F-formation recognition by a socially aware robot
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223469
Hrishav Bakul Barua, Pradip Pramanick, Chayan Sarkar, Theint Haythi Mg
This paper presents a novel architecture to detect social groups in real-time from a continuous image stream of an ego-vision camera. An F-formation defines the spatial orientation in which two or more persons tend to communicate in a social setting. Thus, essentially, we detect F-formations in social gatherings such as meetings and discussions, and predict the robot’s approach angle if it wants to join the social group. Additionally, we detect outliers, i.e., persons who are not part of the group under consideration. Our proposed pipeline consists of: a) a skeletal key-point estimator (17 points in total) for each detected human in the scene, b) a learning model using a CRF, with a feature vector based on the skeletal points, to detect groups of people and outliers in a scene, and c) a separate learning model using a multi-class Support Vector Machine (SVM) to predict the exact F-formation of the group of people in the current scene and the angle of approach for the viewing robot. The system is evaluated on two datasets. The results show that group and outlier detection in a scene using our method achieves an accuracy of 91%. We have rigorously compared our system with a state-of-the-art F-formation detection system and found that it outperforms the state of the art by 29% for formation detection and by 55% for combined detection of the formation and approach angle.
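Stage (c) of the pipeline described above can be sketched as a multi-class SVM over skeletal features. This is a minimal illustration only: the feature layout, formation labels, and synthetic training data below are invented here, not taken from the paper; only the 17 key points and the use of a multi-class SVM come from the abstract.

```python
# Hedged sketch of stage (c): a multi-class SVM over skeletal features.
# Feature layout, formation labels, and training data are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_KEYPOINTS = 17                       # skeletal key points per person
FORMATIONS = ["face-to-face", "L-shape", "side-by-side"]  # assumed classes

# Synthetic feature vectors: flattened (x, y) key points for two people.
X = rng.normal(size=(90, 2 * 2 * N_KEYPOINTS))
y = rng.integers(0, len(FORMATIONS), size=90)

clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)
pred = FORMATIONS[clf.predict(X[:1])[0]]
print(pred)
```

In a real system the feature vector would encode relative positions and orientations of the detected persons rather than raw coordinates.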
Citations: 7
Affective Touch Robots with Changing Textures and Movements
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223481
Daiki Sato, Mana Sasagawa, Arinobu Niijima
We explore how to design emotional expression using tabletop social robots with multiple texture modules. Previous studies in human-robot interaction have presented various designs for emotionally expressive robots without using anthropomorphic forms or cues. They revealed that haptic stimulation based on the textures and movements of the robots could evoke some emotions in users, although these were limited. In this work, we propose using a combination of textures and movements for richer emotional expression. We implemented tabletop robots equipped with detachable texture modules made of five different materials (plastic resin, aluminum, clay, Velcro, and cotton) and performed a user study with 13 participants to investigate how they would map the combinations of textures and movements to nine emotions chosen from Russell’s circumplex model. The results indicated that the robots could express various emotions such as excited, happy, calm, and sad. Deeper analysis of these results revealed some interesting relationships between emotional valence/arousal and texture/movement: for example, cold texture played an important role in expressing negative valence, and controlling the frequency of the movements could change the expression of arousal.
Citations: 4
Social Drone Sharing to Increase the UAV Patrolling Autonomy in Emergency Scenarios
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223567
Luca Morando, C. Recchiuto, A. Sgorbissa
The popularity of Unmanned Aerial Vehicles (UAVs) has increased in recent years, and the domain of application of this new technology is continuously expanding. However, although UAVs may be extremely useful in monitoring contexts, the operational aspects of drone patrolling services have not yet been extensively studied. Specifically, patrolling and inspecting different targets distributed over a large area with UAVs is still an open problem, due to battery constraints and other practical limitations. In this work, we propose a deterministic algorithm for patrolling large areas in a pre- or post-critical-event scenario. The autonomy range of UAVs is extended with the concept of Social Drone Sharing: citizens may offer to take care of a UAV if it lands in their private area, thus becoming closely involved in the monitoring process. The proposed approach aims at finding optimal routes in this context, minimizing the patrolling time while respecting the battery constraints. Simulation experiments have been conducted, giving some insights into the performance of the proposed method.
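The battery-constrained routing problem described above can be illustrated with a toy deterministic planner. This is not the paper's algorithm: it is a greedy nearest-target sketch that returns to a recharge point whenever the remaining charge cannot cover the next leg plus the trip home, purely to make the constraint concrete.

```python
# Toy sketch (not the paper's algorithm): greedy patrol route with a
# battery constraint, recharging at the base whenever the next target
# cannot be reached while keeping enough charge to return.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_route(base, targets, battery_range):
    route, remaining = [base], list(targets)
    pos, charge = base, battery_range
    while remaining:
        nxt = min(remaining, key=lambda t: dist(pos, t))
        leg = dist(pos, nxt)
        if charge < leg + dist(nxt, base):    # cannot visit and still return
            if pos == base:
                raise ValueError("target out of range even on a full charge")
            route.append(base)                # recharge stop at the base
            pos, charge = base, battery_range
            continue
        route.append(nxt)
        remaining.remove(nxt)
        pos, charge = nxt, charge - leg
    route.append(base)
    return route

print(plan_route((0.0, 0.0), [(1.0, 0.0), (0.0, 1.0), (3.0, 0.0)], 7.0))
```

The Social Drone Sharing idea would add citizen-offered landing spots as extra recharge nodes besides the base.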
Citations: 2
Design of Haptic Gestures for Affective Social Signaling Through a Cushion Interface
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223434
Eleuda Nuñez, Masakazu Hirokawa, Kenji Suzuki
In computer-mediated communication, the range of non-verbal cues or social signals that machines can support is still limited. By integrating haptic information into computational systems, it might be possible to add a new dimension to the way people convey social signals in mediated communication. This research aims to distinguish different haptic gestures using a physical interface with a cushion-like form, designed as a mediator for remote-communication scenarios. The proposed interface senses the user through the cushion’s deformation data combined with motion data. The contributions of this paper are the following: 1) regardless of each participant’s particular interpretation of the gestures, the proposed solution can detect eight haptic gestures with more than 80% accuracy across participants, and 2) the classification of gestures was done without the need for calibration, and independently of the orientation of the cushion. These results represent one step toward the development of affect communication systems that can support haptic gesture classification.
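One way to obtain the orientation independence claimed above is to classify on rotation-invariant summaries (magnitudes and simple statistics) rather than raw axis values. The sketch below is illustrative only: the two toy gestures, the feature set, and the k-NN classifier are assumptions, not the paper's method.

```python
# Illustrative sketch of orientation-independent gesture features:
# magnitudes of deformation and motion signals, summarised by simple
# statistics, so classification does not depend on how the cushion is
# held. Gestures, data, and classifier are invented for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)

def features(deform, accel):
    # Rotation-invariant summary: per-sample magnitudes, then statistics.
    mag = np.linalg.norm(accel, axis=1)
    return [deform.mean(), deform.max(), mag.mean(), mag.std()]

# Two toy gestures: "pat" (short, strong) vs "stroke" (long, gentle).
X, y = [], []
for label, (d_scale, a_scale) in enumerate([(1.0, 2.0), (0.3, 0.5)]):
    for _ in range(30):
        deform = np.abs(rng.normal(0, d_scale, size=50))
        accel = rng.normal(0, a_scale, size=(50, 3))
        X.append(features(deform, accel))
        y.append(label)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
```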
Citations: 2
Should robots have accents?
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223599
Ilaria Torre, Sébastien Le Maguer
Accents are vocal features that immediately tell a listener whether a speaker comes from the same place as they do, i.e. whether they share a social group. This in-groupness is important, as people tend to prefer interacting with others who belong to their same groups. Accents also evoke attitudinal responses based on their perceived prestige. These accent-based perceptions might affect interactions between humans and robots. Yet, very few studies so far have investigated the effect of accented robot speakers on users’ perceptions and behaviour, and none have collected users’ explicit preferences regarding robot accents. In this paper we present results from a survey of over 500 British speakers, who indicated what accent they would like a robot to have. The largest proportion of participants wanted a robot to have a Standard Southern British English (SSBE) accent, followed by an Irish accent. Crucially, very few people wanted a robot with their own accent, or with a machine-like voice. These explicit preferences might not turn out to predict more successful interactions, not least because of the unrealistic expectations that such human-like vocal features might generate in a user. Nonetheless, it seems that people have an idea of what their artificial companions should sound like, and this preference should be considered when designing them.
Citations: 8
The Role of Social Cues for Goal Disambiguation in Human-Robot Cooperation
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223546
Samuele Vinanzi, A. Cangelosi, C. Goerick
Social interaction is the new frontier in contemporary robotics: we want to build robots that blend with ease into our daily social environments, following their norms and rules. The cognitive skill that bootstraps social awareness in humans is known as "intention reading", and it allows us to interpret other agents’ actions and assign meaning to them. Given its centrality for humans, it is likely that intention reading will foster the development of robotic social understanding. In this paper, we present an artificial cognitive architecture for intention reading in human-robot interaction (HRI) that makes use of social cues to disambiguate goals. This is accomplished by performing a low-level action encoding paired with a high-level probabilistic goal inference. We introduce a new clustering algorithm, developed to differentiate multi-sensory human social cues by performing several levels of clustering on different feature spaces, paired with a Bayesian network that infers the underlying intention. The model has been validated through an interactive HRI experiment involving a joint manipulation game performed by a human and a robotic arm in a toy block scenario. The results show that the artificial agent was capable of reading the intention of its partner and of cooperating in mutual interaction, thus validating the novel methodology and the use of social cues to disambiguate goals, in addition to demonstrating the advantages of intention reading in social HRI.
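The two-stage idea of the architecture (cluster low-level cues into symbols, then infer a goal probabilistically) can be sketched in miniature. Everything below is invented for illustration: the cue data, the cluster count, the goal names, and the conditional probability table stand in for the paper's learned models.

```python
# Minimal sketch of the two-stage idea: cluster low-level cue features
# into symbols, then infer a goal from observed symbols with Bayes' rule.
# Cue data, cluster count, goals, and the likelihood table are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic 2-D cue features drawn around two centres.
cues = rng.normal(loc=[[0, 0], [5, 5]], size=(40, 2, 2)).reshape(-1, 2)
symbols = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cues)

goals = ["build-tower", "sort-by-colour"]       # hypothetical goals
prior = np.array([0.5, 0.5])
# P(symbol | goal): rows = goals, cols = cluster symbols (assumed table).
likelihood = np.array([[0.8, 0.2],
                       [0.3, 0.7]])

def posterior(observed_symbols):
    p = prior.copy()
    for s in observed_symbols:
        p *= likelihood[:, s]
    return p / p.sum()

print(posterior(symbols[:5]))
```

The real architecture layers several clusterings over different feature spaces and uses a full Bayesian network rather than this single conditional table.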
Citations: 9
A Task Allocation Approach for Human-Robot Collaboration in Product Defects Inspection Scenarios
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223455
Hossein Karami, Kourosh Darvish, F. Mastrogiovanni
The presence and coexistence of human operators and collaborative robots in shop-floor environments raises the need for assigning tasks to either operators or robots, or both. Depending on task characteristics, operator capabilities and the involved robot functionalities, it is of the utmost importance to design strategies allowing for the concurrent and/or sequential allocation of tasks related to object manipulation and assembly. In this paper, we extend the FLEXHRC framework presented in [1] to allow a human operator to interact with multiple, heterogeneous robots at the same time in order to jointly carry out a given task. The extended FLEXHRC framework leverages a concurrent and sequential task representation framework to allocate tasks to either operators or robots as part of a dynamic collaboration process. In particular, we focus on a use case related to the inspection of product defects, which involves a human operator, a dual-arm Baxter manipulator from Rethink Robotics and a Kuka youBot mobile manipulator.
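The core decision sketched in the abstract (give each task to an operator or a robot based on what it requires) can be illustrated with a toy capability-matching allocator. This is not the FLEXHRC algorithm; the agents, capabilities, and tasks below are invented for illustration.

```python
# Toy sketch of capability-based task allocation (not FLEXHRC): each
# task goes to the first agent, human or robot, whose capabilities
# cover the task's requirements. Names and capabilities are invented.
def allocate(tasks, agents):
    plan = {}
    for task, needs in tasks.items():
        for agent, caps in agents.items():
            if needs <= caps:          # all required capabilities present
                plan[task] = agent
                break
        else:
            plan[task] = None          # no agent can perform this task
    return plan

agents = {"operator": {"dexterity", "judgement"},
          "baxter":   {"dexterity", "payload"}}
tasks = {"inspect-surface": {"judgement"},
         "move-crate":      {"payload"},
         "weld-seam":       {"welding"}}
print(allocate(tasks, agents))
```

The actual framework additionally reasons over concurrent and sequential task representations rather than assigning tasks independently.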
Citations: 17
Learning prohibited and authorised grasping locations from a few demonstrations
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223486
François Hélénon, Laurent Bimont, E. Nyiri, Stéphane Thiery, O. Gibaru
Our motivation is to ease robots’ reconfiguration for pick-and-place tasks in an industrial context. This paper proposes a fast-learning neural network model, trained from one or a few demonstrations in less than 5 minutes, that can efficiently predict grasping locations on a specific object. The proposed methodology is easy to apply in an industrial context, as it is based exclusively on the operator’s demonstrations and does not require a CAD model, an existing database, or a simulator. As the predictions of a neural network can be erroneous, especially when trained with very little data, we propose to indicate both authorised and prohibited locations for safety reasons. This allows us to handle fragile objects or to perform task-oriented grasping. Our model learns the semantic representation of objects (prohibited/authorised) thanks to a simplified data representation, a simplified neural network architecture, and an adequate training framework. We trained specific networks for different objects and conducted experiments on a real 7-DOF robot, which showed good performance (70 to 100% depending on the object) using only one demonstration. The proposed model generalises well, as performance remains good even when grasping several similar objects with the same network trained on one of them.
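The authorised/prohibited labelling can be pictured as a small classifier over candidate grasp points trained from a handful of demonstrated examples. The sketch below is a hedged illustration only: the object geometry, the data, and the use of scikit-learn's MLPClassifier are assumptions, not the paper's architecture.

```python
# Hedged sketch: a small network labels candidate grasp locations on an
# object as authorised (1) or prohibited (0), trained from demonstrated
# points. Data and architecture are illustrative, not the paper's model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Demonstrated points in normalised object coordinates (x, y):
# here the left half (a hypothetical handle) is authorised.
demo_x = rng.uniform(0, 1, size=(60, 2))
demo_y = (demo_x[:, 0] < 0.5).astype(int)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(demo_x, demo_y)
print(net.predict([[0.1, 0.5], [0.9, 0.5]]))   # query two candidate points
```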
Citations: 5
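As a purely illustrative sketch of the idea in the abstract above (not the authors' neural network or code), the snippet below labels candidate grasp locations as authorised or prohibited from a handful of demonstrated points, using a simple nearest-neighbour rule over 2D coordinates; the demonstration points, labels, and `classify_grasp` helper are all hypothetical.

```python
# Hypothetical sketch: classify candidate grasp locations on an object as
# "authorised" or "prohibited" from a few operator demonstrations, using a
# 1-nearest-neighbour rule over normalised 2D surface coordinates.
import math

# A few demonstrated points: (x, y) on the object surface -> label.
demos = [
    ((0.10, 0.50), "authorised"),   # e.g. the handle
    ((0.15, 0.55), "authorised"),
    ((0.80, 0.50), "prohibited"),   # e.g. a fragile part
    ((0.85, 0.45), "prohibited"),
]

def classify_grasp(point):
    """Label a candidate grasp by its nearest demonstrated location."""
    _, label = min(
        ((math.dist(point, p), lab) for p, lab in demos),
        key=lambda t: t[0],
    )
    return label

print(classify_grasp((0.12, 0.52)))  # near the handle -> authorised
print(classify_grasp((0.82, 0.48)))  # near the fragile part -> prohibited
```

The two-class (authorised/prohibited) output mirrors the paper's safety motivation: an uncertain predictor should be told where it must not grasp, not only where it may.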
Forces and torque measurements in the interaction of kitchen-utensils with food during typical cooking tasks: preliminary test and evaluation
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223457
Débora Pereira, Alessandro Morassut, E. Tiberi, P. Dario, G. Ciuti
The study of cooking tasks, such as grilling, is hindered by several adverse conditions for sensors, such as the proximity to humidity, fat, and heat. Still, robotics research could benefit from understanding the human control of forces and torques in important contact interactions of kitchen-utensils with food. This work presents a preliminary study on the dynamics of grilling tasks (i.e. food flipping movements). A spatula and kitchen-tweezers were instrumented to measure forces and torque in multiple directions. Furthermore, we designed an experimental setup to keep sensors distant from heat/humidity and to, simultaneously, hold the effects of grilling (stickiness/slipperiness) during the tasks execution and recording. This allowed a successful data collection of 1426 movements with the spatula (flipping hamburgers, chicken, zucchini and eggplant slices) and 660 movements with the tweezers (flipping zucchini and eggplant slices), performed by chefs and ordinary home cooks. Finally, we analyzed three dynamical characteristics of the tasks for the different food: bending force and torsion torque on the impact to unstick food, and maximum pinching with tweezers. We verified that bending on impact and maximum pinching are adjusted to the food by both chefs and home cooks.
Citations: 2
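To make the task characteristics named in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' pipeline) of extracting a peak value from a sampled force trace, as one would for the bending force on the unsticking impact or the maximum pinching with tweezers; the traces and the `peak_force` helper are illustrative assumptions.

```python
# Hypothetical sketch: extract a peak force from a recorded force signal,
# e.g. the spatula bending force around the impact that unsticks the food,
# or the maximum tweezer pinching force during one flip.
def peak_force(samples):
    """Return the maximum absolute force in a window (same unit as input)."""
    return max(abs(f) for f in samples)

# Simulated spatula bending-force trace (N) around the unsticking impact.
bending = [0.2, 0.5, 4.8, 3.1, 0.9, 0.3]
# Simulated tweezer pinching-force trace (N) during one flip.
pinching = [0.1, 1.2, 2.4, 2.0, 0.4]

print(peak_force(bending))   # 4.8
print(peak_force(pinching))  # 2.4
```

In the paper, such per-movement peaks are what get compared across foods and between chefs and home cooks.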
Interactive Robotic Systems as Boundary-Crossing Robots – the User’s View*
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223575
Kentaro Watanabe, K. Jokinen
Social robots are receiving more attention through increased research and development, and they are gradually becoming a part of our daily lives. In this study, we investigated how social robots are accepted by robot users. We applied the theoretical lens of the boundary-crossing robot concept, which describes the role shift of robots from tools to agents. This concept highlights the impact of social robots on the everyday lives of humans, and can be used to structure the development of perceived interactions between robots and human users. In this paper, we report on the results of a web questionnaire study conducted among users of interactive devices (humanoid robots, animal robots, and smart speakers). Their acceptance and roles in daily life are compared from both functional and affective perspectives, with respect to their perceived roles as boundary-crossing robots.
Citations: 0
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)