Latest publications: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

The Impact of Social Robots in Education: Moral Considerations of Dutch Educational Policymakers
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223582
Matthijs H. J. Smakman, J. Berket, E. Konijn
Social robots are increasingly studied and applied in the educational domain. Although they hold great potential for education, they also bring new moral challenges. In this study, we explored the moral considerations related to social robots from the perspective of Dutch educational policymakers by first identifying opportunities and concerns and then mapping them onto (moral) values from the literature. To explore their moral considerations, we conducted focus group sessions with Dutch Educational Policymakers (N = 20). Considerations varied from the potential to lower the workload of teachers, to concerns related to the increased influence of commercial enterprises on the educational system. In total, the considerations of the policymakers related to 15 theoretical values. We identified the moral considerations of educational policymakers to gain a better understanding of a governmental attitude towards the use of social robots. This helps to create the necessary moral guidelines towards a responsible implementation of social robots in education.
Citations: 3
Physiological Data-Based Evaluation of a Social Robot Navigation System
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223539
Hasan Kivrak, Pinar Uluer, Hatice Kose, E. Gümüslü, D. Erol, Furkan Çakmak, S. Yavuz
The aim of this work is to create a social navigation system for an affective robot that acts as an assistant in the audiology department of hospitals, serving children with hearing impairments. Unlike traditional navigation systems, this system differentiates between objects and human beings and optimizes several parameters to maintain a social distance from humans during motion, so as not to intrude on their personal zones. For this purpose, social robot motion planning algorithms are employed to generate human-friendly paths that maintain humans’ safety and comfort during the robot’s navigation. This paper evaluates the system against traditional navigation based on surveys and physiological data from adult participants, collected in a preliminary study before the system is used with children. Although the self-report questionnaires do not show any significant difference between the robot’s navigation profiles, the physiological data may be interpreted to indicate that participants felt more comfortable and less threatened in the social navigation case.
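The social-distance idea described in the abstract is commonly realized as an extra cost layer around detected people in a planner's costmap. The sketch below is a minimal illustration of that general technique, not the authors' implementation; the Gaussian form and the `sigma`/`weight` values are assumptions.

```python
import math

def proxemic_cost(cell, humans, sigma=0.8, weight=10.0):
    """Extra traversal cost near detected humans (one 2-D Gaussian per person).

    cell:   (x, y) position of a grid cell, in metres
    humans: list of (x, y) detected human positions
    sigma:  assumed spread of the personal zone, in metres
    weight: assumed peak cost added at a person's position
    """
    cost = 0.0
    for hx, hy in humans:
        d2 = (cell[0] - hx) ** 2 + (cell[1] - hy) ** 2
        cost += weight * math.exp(-d2 / (2.0 * sigma ** 2))
    return cost

# A planner would add this to the base traversal cost, so paths bend
# around people while static objects remain plain obstacles.
near = proxemic_cost((0.0, 0.0), [(0.5, 0.0)])   # large cost close to a person
far = proxemic_cost((3.0, 0.0), [(0.5, 0.0)])    # near-zero cost far away
```

Because the cost decays smoothly rather than being a hard obstacle, the robot can still pass near people when no wider path exists, which matches the comfort-versus-efficiency trade-off the paper evaluates.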
Citations: 6
Understanding Human-Robot Collaboration for People with Mobility Impairments at the Workplace, a Thematic Analysis
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223489
S. A. Arboleda, Max Pascher, Younes Lakhnati, J. Gerken
Assistive technologies such as human-robot collaboration have the potential to ease the lives of people with physical mobility impairments in social and economic activities. Currently, this group has lower rates of economic participation, due to the lack of adequate environments adapted to their capabilities. We take a closer look at the needs and preferences of people with physical mobility impairments in a human-robot cooperative environment at the workplace. Specifically, we aim to design how people with physical mobility impairments can control a robotic arm in manufacturing tasks. We present a case study of a sheltered workshop as a prototype for an institution that employs people with disabilities in manufacturing jobs. Here, we collected data from potential end-users with physical mobility impairments, social workers, and supervisors using a participatory design technique (Future-Workshop). These stakeholders were divided into two groups, primary users (end-users) and secondary users (social workers, supervisors), which were run across two separate sessions. The gathered information was analyzed using thematic analysis to reveal underlying themes across stakeholders. We identified concepts that highlight underlying concerns related to the robot fitting into the social and organizational structure, human-robot synergy, and human-robot problem management. In this paper, we present our findings and discuss the implications of each theme for shaping an inclusive human-robot cooperative workstation for people with physical mobility impairments.
Citations: 6
When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interactions
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223551
Wenxuan Mou, Martina Ruocco, Debora Zanatto, A. Cangelosi
Trust is a critical issue in human-robot interaction (HRI), as it is at the core of humans’ willingness to accept and use a non-human agent. Theory of Mind (ToM) has been defined as the ability to understand the beliefs and intentions of others that may differ from one’s own. Evidence in psychology and HRI suggests that trust and ToM are interconnected and interdependent concepts, as the decision to trust another agent must depend on our own representation of that entity’s actions, beliefs, and intentions. However, very few works take the robot’s ToM into consideration while studying trust in HRI. In this paper, we investigated whether exposure to the ToM abilities of a robot could affect humans’ trust towards it. To this end, participants played a Price Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. Specifically, the participants were asked to accept the price evaluations of common objects presented by the robot. The participants’ willingness to change their own price judgement of the objects (i.e., to accept the price the robot suggested) was used as the main measurement of trust towards the robot. Our experimental results showed that robots presented with high-level ToM abilities were trusted more than robots presented with low-level ToM skills.
Citations: 22
PredGaze: A Incongruity Prediction Model for User’s Gaze Movement
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223525
Y. Otsuka, Shohei Akita, Kohei Okuoka, Mitsuhiko Kimoto, M. Imai
With digital signage and communication robots, digital agents have gradually become popular and will become more so. It is important that humans notice the intentions of agents throughout their interaction with them. This paper focuses on the gaze behavior of an agent and on the phenomenon that, if an agent’s gaze behavior differs from human expectations, humans sense an incongruity and instinctively feel that an intention of the agent lies behind the behavioral change. We propose PredGaze, a model that estimates this incongruity from the shift of the agent’s gaze behavior away from the human’s expectations. In particular, PredGaze uses the variance in the agent behavior model to express how well humans sense the behavioral tendency of the agent. We expect that this variance will improve the estimation of the incongruity. PredGaze uses three variables to estimate the internal state of how strongly a human senses the agent’s intention: error, confidence, and incongruity. To evaluate the effectiveness of PredGaze with these three variables, we conducted an experiment to investigate the effects of the timing of the gaze behavior change and the resulting incongruity. The experimental results indicated significant differences in the subjective scores of the agents’ naturalness and of incongruity with the agents, depending on the timing of the agent’s change in its gaze behavior.
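The interplay of the three variables (error, confidence, incongruity) can be pictured with a toy estimator. The update rule below is a hypothetical reading of the abstract, not the published PredGaze model: error is the deviation of observed gaze from the prediction, confidence grows as the variance of recent errors shrinks, and incongruity is the confidence-weighted error.

```python
import statistics

def update_incongruity(observed, predicted, history, window=10):
    """Toy incongruity update (illustrative, not the published model).

    observed/predicted: gaze angles (any consistent unit)
    history: running list of past prediction errors (mutated in place)
    """
    # error: deviation of the observed gaze from the model's prediction
    error = abs(observed - predicted)
    history.append(error)
    recent = history[-window:]
    # confidence: high when recent errors have low variance, i.e. the human
    # has formed a stable sense of the agent's behavioural tendency
    variance = statistics.pvariance(recent) if len(recent) > 1 else 1.0
    confidence = 1.0 / (1.0 + variance)
    # incongruity: surprise weighted by how confident the expectation was
    incongruity = confidence * error
    return error, confidence, incongruity

history = []
for _ in range(5):                       # stable phase: gaze matches expectations
    update_incongruity(0.1, 0.0, history)
_, _, surprise = update_incongruity(1.0, 0.0, history)  # sudden gaze shift
```

Under this sketch, the same gaze deviation yields a larger incongruity after a long stable phase than after erratic behavior, which matches the paper's intuition that variance in the behavior model should shape the estimate.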
Citations: 1
On the Expressivity of a Parametric Humanoid Emotion Model
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223459
Pooja Prajod, K. Hindriks
Emotion expression is an important part of human-robot interaction. Previous studies typically focused on a small set of emotions and a single channel to express them. We developed an emotion expression model that modulates motion, pose, and LED features parametrically, using valence and arousal values. This model does not interrupt the task or gesture being performed and hence can be used in combination with functional behavioural expressions. Even though our model is relatively simple, it is just as capable of expressing emotions as other, more complicated models proposed in the literature. We systematically explored the expressivity of our model and found that a parametric model using 5 key motion and pose features can effectively express emotions in the two quadrants where valence and arousal have the same sign. As paradigmatic examples, we tested for happy, excited, sad, and tired. By adding a second channel (eye LEDs), the model is also able to express high-arousal (anger) and low-arousal (relaxed) emotions in the two other quadrants. Our work supports other findings that it remains hard to express moderate-arousal emotions in these quadrants for both negative (fear) and positive (content) valence.
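A valence-arousal parametric model of this kind can be pictured as a mapping from a (valence, arousal) pair to per-feature scale factors. The feature names and coefficients below are invented for illustration; the paper's calibrated 5-feature model is not given in the abstract.

```python
def emotion_params(valence, arousal):
    """Illustrative valence-arousal parametrization (not the authors' model).

    valence, arousal: floats in [-1, 1]. Arousal drives motion speed and
    amplitude; valence drives head pose and eye-LED hue.
    """
    assert -1.0 <= valence <= 1.0 and -1.0 <= arousal <= 1.0
    return {
        "speed_scale": 1.0 + 0.5 * arousal,      # faster motion when aroused
        "amplitude_scale": 1.0 + 0.3 * arousal,  # larger gestures when aroused
        "head_pitch_deg": 15.0 * valence,        # head up when happy, down when sad
        "led_hue_deg": 120.0 if valence >= 0 else 240.0,  # green vs. blue eyes
    }

happy = emotion_params(valence=0.8, arousal=0.6)
sad = emotion_params(valence=-0.8, arousal=-0.6)
```

Because the mapping only rescales whatever motion is currently executing, it preserves the paper's key property: emotion can be layered onto an ongoing task or gesture without interrupting it.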
Citations: 2
Virtual Reality based Telerobotics Framework with Depth Cameras
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223445
Bukeikhan Omarali, Brice D. Denoun, K. Althoefer, L. Jamone, Maurizio Valle, I. Farkhatdinov
This work describes a virtual reality (VR) based robot teleoperation framework that relies on scene visualization from depth cameras and implements human-robot and human-scene interaction gestures. We suggest that mounting a camera on a slave robot’s end-effector (an in-hand camera) allows the operator to achieve better visualization of the remote scene and improves task performance. We experimentally compared the operator’s ability to understand the remote environment in different visualization modes: single external static camera; in-hand camera; in-hand and external static camera; in-hand camera with OctoMap occupancy mapping. The latter option provided the operator with a better understanding of the remote environment while requiring relatively little communication bandwidth. Consequently, we propose suitable grasping methods compatible with VR-based teleoperation using the in-hand camera. Video demonstration: https://youtu.be/3vZaEykMS_E.
Citations: 16
A Two-Layered Approach to Adaptive Dialogues for Robotic Assistance
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223605
Riccardo De Benedictis, A. Umbrico, Francesca Fracasso, Gabriella Cortellessa, Andrea Orlandini, A. Cesta
Socially assistive robots should provide users with personalized assistance within a wide range of scenarios such as hospitals, home and social settings, and private houses. Different people may have different needs, both at the level of cognitive/physical support and in their interaction preferences. Consequently, the typology of tasks and the way assistance is delivered can change according to the person with whom the robot is interacting. The authors’ long-term research goal is the realization of an advanced cognitive system able to support multiple assistive scenarios with adaptations over time. We show here how the integration of model-based and model-free AI technologies can contextualize robot assistive behaviors and dynamically decide what to do (assistive plan) and how to do it (assistive plan execution), according to the different features and needs of the assisted persons. Although the approach is general, the paper specifically focuses on the synthesis of personalized therapies for the (cognitive) stimulation of users.
Citations: 9
Human-Robot Artistic Co-Creation: a Study in Improvised Robot Dance
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223446
Oscar Thörn, Peter Knudsen, A. Saffiotti
Joint artistic performance, like music, dance or acting, provides an excellent domain to observe the mechanisms of human-human collaboration. In this paper, we use this domain to study human-robot collaboration and co-creation. We propose a general model in which an AI system mediates the interaction between a human performer and a robotic performer. We then instantiate this model in a case study, implemented using fuzzy logic techniques, in which a human pianist performs jazz improvisations, and a robot dancer performs classical dancing patterns in harmony with the artistic moods expressed by the human. The resulting system has been evaluated in an extensive user study, and successfully demonstrated in public live performances.
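The fuzzy-logic mediation mentioned in the abstract can be illustrated with a toy Mamdani-style inference step; the rule set, membership functions, and feature names below are invented for illustration, since the paper's actual rules are not given here.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dance_energy(tempo_bpm, loudness):
    """Toy fuzzy mediator: slow/quiet playing -> calm dancing,
    fast/loud playing -> lively dancing (weighted-average defuzzification).

    loudness is assumed normalized to [0, 1]; outputs an energy level in [0, 1].
    """
    slow = tri(tempo_bpm, 40, 60, 100)
    fast = tri(tempo_bpm, 80, 140, 200)
    quiet = tri(loudness, 0.0, 0.2, 0.6)
    loud = tri(loudness, 0.4, 0.8, 1.0)
    # rule strengths: min acts as fuzzy AND
    calm = min(slow, quiet)
    lively = min(fast, loud)
    if calm + lively == 0.0:
        return 0.5  # no rule fires: fall back to a neutral energy
    # defuzzify against assumed output levels 0.2 (calm) and 0.9 (lively)
    return (calm * 0.2 + lively * 0.9) / (calm + lively)
```

The point of such a mediator is that the robot's motion parameters vary continuously with the musician's playing, so the dance tracks gradual changes in mood rather than switching abruptly between presets.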
Citations: 7
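The fuzzy-logic mediation described in the abstract above can be illustrated with a toy rule base mapping one music feature (tempo, in BPM) to one dance parameter (movement speed in [0, 1]). The membership breakpoints and rule outputs here are invented for illustration; the paper's actual rule base is richer:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dance_speed(bpm):
    """Three fuzzy rules, defuzzified by weighted average:
    slow tempo -> calm motion, fast tempo -> energetic motion."""
    slow   = tri(bpm, 40, 70, 100)
    medium = tri(bpm, 80, 115, 150)
    fast   = tri(bpm, 130, 170, 210)
    rules = {0.2: slow, 0.5: medium, 0.9: fast}   # output -> firing strength
    total = sum(rules.values())
    if total == 0:
        return 0.5  # no rule fires: fall back to neutral motion
    return sum(out * w for out, w in rules.items()) / total
```

A slow ballad around 70 BPM yields calm motion (speed 0.2), an up-tempo piece around 170 BPM yields energetic motion (0.9), and tempos in between blend the adjacent rules smoothly — the kind of continuous mapping that lets the dancer follow the pianist's mood without hard switching.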
WalkingBot: Modular Interactive Legged Robot with Automated Structure Sensing and Motion Planning
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223474
Meng Wang, Yao Su, Hangxin Liu, Ying-Qing Xu
This paper presents WalkingBot, a modular robot system that allows non-expert users to build a multi-legged robot in various morphologies using a set of building blocks with sensors and actuators embedded. The kinematic model of the built robot is interpreted automatically and revealed in a customized GUI through an integrated hardware and software design, so that users can understand, control, and program the robot easily. A Model Predictive Control (MPC) scheme is introduced to generate a control policy for various motions (e.g. moving forward, turning left) corresponding to the sensed robot structure, affording rich robot motions right after assembling. Targeting different levels of programming skill, two programming methods, visual block programming and events programming, are also presented to enable users to create their own interactive legged robot.
Citations: 3
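The Model Predictive Control scheme mentioned in the abstract above can be sketched at toy scale: a receding-horizon search over short sequences of discrete actions, applying only the first action of the best sequence at each step. The 1-D kinematic model, action set, and cost weights below are assumptions for illustration, not WalkingBot's actual controller:

```python
from itertools import product

ACTIONS = (-1.0, 0.0, 1.0)   # e.g. step back / stay / step forward

def mpc_step(x, target, horizon=3):
    """Enumerate all action sequences of length `horizon`, roll out the
    model x' = x + u, and return the first action of the cheapest sequence."""
    best_cost, best_u0 = float("inf"), 0.0
    for seq in product(ACTIONS, repeat=horizon):
        xi, cost = x, 0.0
        for u in seq:
            xi += u
            cost += (xi - target) ** 2 + 0.01 * u * u  # tracking + effort
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0  # receding horizon: apply only the first action

# Drive the toy robot from 0 toward target position 4, replanning each step.
x = 0.0
for _ in range(6):
    x += mpc_step(x, target=4.0)
```

The robot steps toward the target and then holds position once it arrives, because staying put becomes the cheapest plan — the same replan-every-step structure an MPC controller would use over a sensed robot morphology, just with a trivial model.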
Journal
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)