
Latest publications: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

Dirichlet-based Dynamic Movement Primitives for encoding periodic motions with predefined accuracy
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900755
Dimitrios Papageorgiou, D. Argiropoulos, Z. Doulgeri
In this work, the utilization of Dirichlet (periodic sinc) basis functions in DMPs for encoding periodic motions is proposed. By utilizing such kernels, we are able to analytically compute the minimum required number of kernels based only on the predefined accuracy, which is a hyperparameter that can be intuitively selected. The computation of the minimum required number of kernels is based on the frequency content of the demonstrated motion. The learning procedure essentially consists of the sampling of the demonstrated trajectory. The approach is validated through simulations and experiments with the KUKA LWR4+ robot, which show that, with the automatically calculated number of basis functions, the predefined accuracy is achieved by the proposed DMP model.
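The analytic computation described above, selecting the number of periodic-sinc kernels from the frequency content of the demonstration, can be illustrated with a rough sketch. The thresholding rule below (keep harmonics until the omitted tail amplitude drops under the predefined accuracy) is an assumption for illustration, not the paper's exact criterion:

```python
import numpy as np

def min_kernels_from_demo(samples, eps):
    """Estimate the minimum number of Dirichlet (periodic sinc) kernels
    needed to reproduce one period of a demonstration within accuracy eps.

    Heuristic sketch: find the highest harmonic whose tail amplitude still
    exceeds eps; the paper's exact accuracy bound may differ.
    """
    n = len(samples)
    coeffs = np.fft.rfft(samples) / n          # one-sided spectrum
    amps = np.abs(coeffs)
    amps[1:] *= 2.0                            # account for the conjugate half
    k_max = 0
    for k in range(len(amps) - 1, 0, -1):
        if np.sum(amps[k:]) > eps:             # worst-case residual amplitude
            k_max = k
            break
    return 2 * k_max + 1                       # sin/cos pair per harmonic + DC

# usage: a signal with harmonics 1 and 3 needs kernels up to k = 3
t = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
demo = np.sin(t) + 0.2 * np.sin(3.0 * t)
print(min_kernels_from_demo(demo, eps=0.05))   # → 7
```

With the demonstration sampled over exactly one period, the spectrum is discrete and the kernel count follows directly from the highest significant harmonic.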
Citations: 0
Impacts of Teaching towards Training Gesture Recognizers for Human-Robot Interaction
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900774
Jianxun Tan, Wesley P. Chan, Nicole L. Robinson, D. Kulić, E. Croft
The use of hand-based gestures has been proposed as an intuitive way for people to communicate with robots. Typically, the set of gestures is defined by the experimenter. However, existing works do not necessarily focus on gestures that are communicative, and it is unclear whether the selected gestures are actually intuitive to users. This paper investigates whether different people inherently use similar gestures to convey the same commands to robots, and how teaching gestures when collecting demonstrations for training recognizers can improve the resulting accuracy. We conducted this work in two stages. In Stage 1, we conducted an online user study (n=190) to investigate whether people use similar gestures to communicate the same set of given commands to a robot when no guidance or training was given. Results revealed large variations in the gestures used among individuals in the absence of training. Training a gesture recognizer using this dataset resulted in an accuracy of around 20%. In response to this, Stage 2 involved proposing a common set of gestures for the commands. We taught these gestures through demonstrations and collected ~ 7500 videos of gestures from study participants to train another gesture recognition model. Initial results showed improved accuracy, but a number of gestures had high confusion rates. Refining our gesture set and recognition model by removing those gestures, we achieved a final accuracy of 84.1 ± 2.4%. We integrated the gesture recognition model into the ROS framework and demonstrated a use case, where a person commands a robot to perform a pick and place task using the gesture set.
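The refinement step described above, dropping gestures with high confusion rates, might look like the following sketch. The confusion matrix, labels and the 0.7 recall threshold are illustrative assumptions, not the study's actual data or criterion:

```python
import numpy as np

def prune_confused_gestures(confusion, labels, min_recall=0.7):
    """Return the gesture labels to keep, dropping any whose row recall
    (correct / total) in the confusion matrix falls below min_recall.

    The 0.7 threshold is illustrative; the study's pruning criterion
    is not specified at this level of detail.
    """
    confusion = np.asarray(confusion, dtype=float)
    totals = confusion.sum(axis=1)
    recall = np.divide(np.diag(confusion), totals,
                       out=np.zeros_like(totals), where=totals > 0)
    return [lab for lab, r in zip(labels, recall) if r >= min_recall]

# usage: "wave" is frequently confused with "stop", so it is dropped
labels = ["stop", "go", "wave"]
confusion = [[18, 1, 1],    # stop: 90% recall
             [2, 17, 1],    # go:   85% recall
             [9, 2, 9]]     # wave: 45% recall
print(prune_confused_gestures(confusion, labels))  # → ['stop', 'go']
```

Retraining on the pruned set then trades vocabulary size for recognition accuracy, which is the trade-off the paper's final 84.1% figure reflects.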
Citations: 1
Giving Social Robots a Conversational Memory for Motivational Experience Sharing
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900677
Avinash Saravanan, Maria Tsfasman, Mark Antonius Neerincx, Catharine Oertel
In ongoing and consecutive conversations with people, a social robot has to determine which aspects to remember and how to address them in the conversation. In the health domain, important aspects concern health-related goals, the experienced progress (expressed sentiment) and the ongoing motivation to pursue them. Despite the progress in speech technology and conversational agents, most social robots lack a memory for such experience sharing. This paper presents the design and evaluation of a conversational memory for personalized behavior-change support conversations on healthy nutrition via memory-based motivational rephrasing. The main hypothesis is that referring to previous sessions improves motivation and goal attainment, particularly when the references vary. In addition, the paper explores to what extent motivational rephrasing affects users' perception of the conversational agent (the virtual Furhat). An experiment with 79 participants was conducted via Zoom, consisting of three conversation sessions. The results showed a significant increase in participants' change in motivation when multiple references to previous sessions were provided.
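A minimal sketch of such a conversational memory, storing per-session goals and expressed sentiment and varying the phrasing of references across sessions, might look like this. The field names and rephrasing templates are assumptions for illustration; the paper's memory model and Furhat integration are more elaborate:

```python
from dataclasses import dataclass, field
import itertools

@dataclass
class SessionEntry:
    goal: str
    sentiment: str   # e.g. "positive" progress expressed by the user

@dataclass
class ConversationalMemory:
    sessions: list = field(default_factory=list)
    # cycle through templates so consecutive references are phrased differently
    _templates: itertools.cycle = field(default_factory=lambda: itertools.cycle([
        "Last time you wanted to {goal}. How did that go?",
        "You sounded {sentiment} about your goal to {goal}. Are you on track?",
    ]))

    def remember(self, goal, sentiment):
        self.sessions.append(SessionEntry(goal, sentiment))

    def motivational_reference(self):
        """Refer back to the most recent session with varied phrasing,
        following the study's hypothesis that varied references help."""
        last = self.sessions[-1]
        return next(self._templates).format(goal=last.goal,
                                            sentiment=last.sentiment)

memory = ConversationalMemory()
memory.remember("eat more vegetables", "positive")
print(memory.motivational_reference())
print(memory.motivational_reference())
```

Each call draws the next template, so repeated references to the same session are never phrased identically twice in a row.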
Citations: 1
Differences and Biases in Mentalizing About Humans and Robots
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900849
Sophie Husemann, Jan Pöppel, S. Kopp
Theory of Mind is the process of ascribing mental states to other individuals we interact with. It is used for sense-making of the observed actions and prediction of future actions. Previous studies revealed that humans mentalize about artificial agents, but it is not entirely clear how and to what extent. At the same time mentalizing about humans is often influenced by biases such as an egocentric bias. We present a study investigating differences in participants’ ToM and their susceptibility to an egocentric bias when observing humans vs robots. The participants observed an autonomous robot, a controlled robot, and a human in the same scenarios. The agents had to find an object in a laboratory. While watching the agents, participants had to make several action predictions as an implicit measure of ToM, potentially revealing an egocentric bias. At the end, questions about the agent’s responsibility, awareness and strategy were asked. The results indicate that while participants generally performed ToM for all types of agents, both the scenario as well as the agent type appear to influence participants’ likelihood of exhibiting an egocentric bias.
Citations: 0
Questioning Wizard of Oz: Effects of Revealing the Wizard behind the Robot
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900718
Jauwairia Nasir, Pierre Oppliger, Barbara Bruno, P. Dillenbourg
Wizard of Oz, a very commonly employed technique in human-robot interaction, faces the criticism of being deceptive, as the humans interacting with the robot are told, if at all, only at the end of their interaction that there was in fact a human behind the robot. What if the robot reveals the wizard behind itself very early in the interaction? We built a deep Wizard of Oz setup to allow a robot to play together with a human against a computer AI in the context of the Connect 4 game. This cooperative game interaction against a common opponent is then followed by a conversation between the human and the robot. We conducted an exploratory user study with 29 adults under three conditions, where the robot reveals the wizard, lies about the wizard, and does not say anything, respectively. We also split the data based on how the participants perceive the robot in terms of autonomy. Using different metrics, we evaluate how the users interact with and perceive the robot in both the experimental and perceived conditions. We find that while there is indeed a significant difference in the participants' willingness to follow the robot's suggestions between the experimental conditions, as well as in the effort they put into proving themselves as humans (reverse Turing test), there isn't any significant difference in their perception of the robot. Additionally, how humans perceive whether the robot is tele-operated or autonomous seems to be indifferent to the robot revealing its identity, i.e., preconceived notions may be uninfluenced even if the robot explicitly states otherwise. Lastly, and interestingly, in the perception-based conditions, the absence of statistical significance may suggest that, in certain contexts, Wizard of Oz may not require hiding the wizard after all.
Citations: 2
Exploring requirements and opportunities for social robots in primary mathematics education
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900569
L. Elloumi, Marianne Bossema, S. M. D. Droog, Matthijs H. J. Smakman, S. V. Ginkel, M. Ligthart, K. Hoogland, K. Hindriks, S. B. Allouch
Social robots have been introduced in different fields such as retail, health care and education. Primary education in the Netherlands (and elsewhere) recently faced new challenges because of the COVID-19 pandemic, with lockdowns and quarantines leaving students falling behind and teachers burdened with high workloads. Together with two Dutch municipalities and nine primary schools, we are exploring the long-term use of social robots to study how they might support teachers in primary education, with a focus on mathematics education. This paper presents an exploratory study to define requirements for a social robot math tutor. Multiple focus groups were held with the two main stakeholders, namely teachers and students. The aims of the focus groups were 1) to understand the current situation of mathematics education at the upper primary school level, 2) to identify the problems that teachers and students encounter in mathematics education, and 3) to identify opportunities for deploying a social robot math tutor in primary education from the perspective of both teachers and students. The results inform the development of social robots and opportunities for pedagogical methods used in math teaching, child-robot interaction and potential support for teachers in the classroom.
Citations: 2
Benchmarking deep neural networks for gesture recognition on embedded devices *
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900705
Stefano Bini, Antonio Greco, Alessia Saggese, M. Vento
Gesture is one of the most widely used forms of communication between humans. In recent years, given the trend of adapting factories to the Industry 4.0 paradigm, the scientific community has shown a growing interest in the design of Gesture Recognition (GR) algorithms for Human-Robot Interaction (HRI) applications. Within this context, the GR algorithm needs to work in real time and on embedded platforms with limited resources. However, in the available scientific literature, the aim of the different proposed neural networks (i.e., 2D and 3D) and of the different modalities used for feeding the network (i.e., RGB, RGB-D, optical flow) is typically the optimization of accuracy, without paying much attention to feasibility on low-power hardware devices. The analysis of the trade-off between accuracy and computational burden (for both networks and modalities) therefore becomes important to allow GR algorithms to work in industrial robotics applications. In this paper, we perform a wide benchmarking focusing not only on accuracy but also on computational burden, involving two different architectures (2D and 3D), with two different backbones (MobileNet, ResNeXt) and four types of input modalities (RGB, Depth, Optical Flow, Motion History Image) and their combinations.
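A generic harness for the latency side of such a benchmark could look as follows. The dummy "model" and the warm-up/repeat counts are stand-ins, since the paper's measurements target real CNN backbones (MobileNet, ResNeXt) on embedded boards:

```python
import time
import statistics

def benchmark_latency(infer, inputs, warmup=3, repeats=20):
    """Measure per-call inference latency of `infer` over `inputs`.

    Warm-up iterations are excluded, mirroring common practice when
    profiling networks on embedded hardware, where caches and clock
    governors make the first calls unrepresentative.
    """
    for _ in range(warmup):
        for x in inputs:
            infer(x)
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        for x in inputs:
            infer(x)
        times.append((time.perf_counter() - start) / len(inputs))
    return {"mean_ms": statistics.mean(times) * 1e3,
            "p95_ms": sorted(times)[int(0.95 * len(times))] * 1e3}

# usage with a dummy "model" standing in for a gesture-recognition network
stats = benchmark_latency(lambda x: sum(v * v for v in x),
                          inputs=[list(range(100))] * 5)
print(stats["mean_ms"] > 0.0)
```

Pairing such latency figures with per-modality accuracy yields the accuracy-versus-burden trade-off curves the paper is concerned with.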
Citations: 1
EvaSIM: a Software Simulator for the EVA Open-source Robotics Platform
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900561
M. Rocha, Dagoberto Cruz-Sandoval, J. Favela, D. Muchaluat-Saade
Socially Assistive Robots (SARs) have successfully been used in various types of health therapies as non-pharmacological interventions. A SAR called EVA (Embodied Voice Assistant) is an open-source robotics platform intended to serve as a tool to support research in Human-Robot Interaction. The EVA robot was originally developed to assist in non-pharmacological interventions for people with dementia and has more recently been applied to children with Autism Spectrum Disorder. EVA provides multimodal interactions such as verbal and non-verbal communication, facial recognition and light sensory effects. Although EVA uses low-cost hardware and open-source software, it is not always possible, or practical, to have a physical robot at hand, particularly during rapid iterative cycles of design and evaluation of therapies. This motivated us to develop a simulator that allows the scripts of therapies to be tested before being enacted by the EVA robot. This work proposes EvaSIM (EVA Robot Simulator), a simulator that can interpret EVA script code and emulate the multimodal interaction capabilities of the physical robot, such as Text-To-Speech, facial expression recognition and the control of light sensory effects. Several EVA scripts were run using the simulator, attesting that they produce the same behaviour as on the physical robot. EvaSIM can serve as a support tool in the teaching/learning process of the robot's scripting language, enabling the training of technicians and therapists in script development and testing for the EVA robot.
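A toy version of such a script interpreter is sketched below. The command names ("talk", "led", "expression") and the line-based script format are hypothetical and do not reflect the actual EVA scripting language; the sketch only illustrates the dispatch-to-emulated-capability idea:

```python
class SimulatedEva:
    """Stand-in robot that logs what the physical EVA would do."""

    def __init__(self):
        self.log = []

    def talk(self, text):
        self.log.append(f"TTS: {text}")          # emulate Text-To-Speech

    def led(self, color):
        self.log.append(f"LED: {color}")         # emulate light sensory effects

    def expression(self, name):
        self.log.append(f"FACE: {name}")         # emulate facial display

def run_script(robot, script):
    """Dispatch each 'command argument' line to the matching handler."""
    for line in script.strip().splitlines():
        command, _, arg = line.strip().partition(" ")
        getattr(robot, command)(arg)

eva = SimulatedEva()
run_script(eva, """
talk Hello, I am EVA
led blue
expression happy
""")
print(eva.log)
```

Because the simulated handlers share the physical robot's interface, the same therapy script can, in principle, drive either the simulator or the real robot.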
Citations: 2
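The entry above describes EvaSIM as an interpreter that reads an EVA therapy script and emulates each multimodal action (Text-To-Speech, light effects, etc.) instead of driving hardware. A minimal sketch of that interpreter pattern follows; the `("talk", ...)` / `("light", ...)` command format is hypothetical, since the real EVA script language is not given in this listing.

```python
# Minimal sketch of the interpreter pattern behind EvaSIM: each script
# command is dispatched to a simulated backend that records what the
# robot would do. The command names and tuple format are hypothetical.

class SimulatedEva:
    """Emulates the robot's multimodal outputs by recording them."""
    def __init__(self):
        self.log = []

    def talk(self, text):
        # Stands in for the robot's Text-To-Speech output.
        self.log.append(f"TTS: {text}")

    def light(self, color, state):
        # Stands in for the light sensory effect.
        self.log.append(f"LIGHT {color} {'on' if state else 'off'}")

def run_script(robot, script):
    """Dispatch each (command, *args) tuple to the backend's handler."""
    handlers = {"talk": robot.talk, "light": robot.light}
    for command, *args in script:
        handlers[command](*args)

eva = SimulatedEva()
run_script(eva, [("talk", "Hello!"), ("light", "blue", True)])
print(eva.log)  # -> ['TTS: Hello!', 'LIGHT blue on']
```

Because the simulated backend only logs what it would have done, the same script could later be replayed against a physical-robot backend exposing the same `talk`/`light` interface — the substitution idea the abstract relies on.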
Fuzzy Based Control of a Flexible Bevel-Tip Needle for Percutaneous Interventions
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900829
K. Halder, M. F. Orlando, R. S. Anand
In Minimal Invasive Surgical procedures, flexible bevel tip needles are widely used for percutaneous interventions due to the advantage of enhancing the target reaching accuracy. Here, the target reaching accuracy suffers due to tissue in-homogeneity, deformation in tissue domain and improper steering techniques. The main objective of the percutaneous interventional procedures is ensuring patient safety and reaching desired target position accurately. Several researchers have already developed many approaches to control the needle steering for precise target reaching. To overcome complex approaches in existing controllers, we have proposed a fuzzy based controller to regulate the needle in a specified plane. Our designed method involves the needle non-holonomic constraints based kinematics inside tissue domain and Lyapunov analysis based fuzzy rule base for fuzzy inference system which ensures the closed loop stability of needling system for percutaneous interventional procedures. We have also validated our designed control scheme through extensive simulations and experimentation in biological tissue.
Citations: 1
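The abstract above names two ingredients — planar nonholonomic bevel-tip kinematics and a fuzzy rule base — without reproducing the controller itself. The sketch below is only a generic illustration of how those pieces fit together, not the paper's Lyapunov-derived design: the curvature `KAPPA`, the triangular memberships, and the PD-like error surface are all illustrative assumptions.

```python
import math

# Generic sketch of fuzzy planar steering for a bevel-tip needle.
# KAPPA, the membership supports, and the error surface are
# illustrative assumptions, not the paper's actual controller.

KAPPA = 0.2  # assumed maximum tip curvature while inserting (illustrative)

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(err):
    """Tiny Mamdani-style rule base: map error to a command in [-1, 1]."""
    mu = {"NEG": tri(err, -10.0, -5.0, 0.0),
          "ZERO": tri(err, -5.0, 0.0, 5.0),
          "POS": tri(err, 0.0, 5.0, 10.0)}
    if err <= -10.0:          # saturate the shoulder sets
        mu["NEG"] = 1.0
    if err >= 10.0:
        mu["POS"] = 1.0
    out = {"NEG": -1.0, "ZERO": 0.0, "POS": 1.0}  # rule consequents
    den = sum(mu.values())
    return sum(mu[k] * out[k] for k in mu) / den if den else 0.0

def simulate(y0, steps=400, dt=0.5):
    """Regulate the tip's lateral position y toward the target line y = 0."""
    x, y, theta = 0.0, y0, 0.0
    for _ in range(steps):
        # The error blends lateral offset and heading (PD-like), which
        # is what damps the nonholonomic closed loop.
        u = fuzzy_steer(-(y + 5.0 * theta))
        theta += KAPPA * u * dt    # bevel steering changes the heading
        x += math.cos(theta) * dt  # unit insertion speed along the shaft
        y += math.sin(theta) * dt
    return y

print(abs(simulate(4.0)))  # residual lateral error after insertion
```

Blending heading into the error signal is the design choice worth noting: steering on lateral offset alone leaves the heading undamped and the tip oscillates about the target line.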
AugRE: Augmented Robot Environment to Facilitate Human-Robot Teaming and Communication *
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900721
Frank Regal, Christina Petlowany, Can Pehlivanturk, C. V. Sice, C. Suarez, Blake Anderson, M. Pryor
Augmented Reality (AR) provides a method to superimpose real-time information on the physical world. AR is well-suited for complex robotic systems to help users understand robot behavior, status, and intent. This paper presents an AR system, Augmented Robot Environment (AugRE), that combines ROS-based robotic systems with Microsoft HoloLens 2 AR headsets to form a scalable multi-agent human-robot teaming system for indoor and outdoor exploration. The system allows multiple users to simultaneously localize, supervise, and receive labeled images from robotic clients. An overview of AugRE and details of the novel system architecture that allows for large-scale human-robot teaming is presented below. Studies showcasing system performance with multiple robotic clients are presented. Results show that AugRE can scale to 50 robotic clients with minimal performance degradation, due in part to key components that leverage a recent advancement in robotic client-to-client communication called Robofleet. Finally we discuss new capabilities enabled by AugRE.
Citations: 4
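The AugRE abstract attributes the reported scaling to 50 robotic clients largely to Robofleet's client-to-client messaging. The sketch below illustrates only the fan-out topology of such a system — every user client receives every robot's status through one relay. The class and topic names are hypothetical; neither ROS nor Robofleet's real transport is modeled.

```python
from collections import defaultdict

# Minimal sketch of the fan-out topology behind a multi-agent teaming
# system like AugRE: many robot clients publish status through one
# relay, and every AR user client receives all of them. All names here
# are hypothetical stand-ins, not AugRE's or Robofleet's actual API.

class Relay:
    """In-process stand-in for the client-to-client message broker."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self.subscribers[topic]:
            cb(msg)

class UserClient:
    """E.g. a headset client that supervises every robot."""
    def __init__(self, relay):
        self.seen = {}  # robot name -> last known pose
        relay.subscribe("status", self.on_status)

    def on_status(self, msg):
        self.seen[msg["robot"]] = msg["pose"]

relay = Relay()
users = [UserClient(relay) for _ in range(3)]
for i in range(50):  # 50 robot clients, matching the reported scaling study
    relay.publish("status", {"robot": f"robot{i}", "pose": (i, 0.0)})

print(len(users[0].seen))  # -> 50: each user tracks all 50 robots
```

The point of the topology is that adding a robot costs one publisher, not one connection per user — which is why a thin relay layer is what lets the client count grow.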
Published in: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)