Pub Date: 2022-08-29, DOI: 10.1109/RO-MAN53752.2022.9900755
Dirichlet-based Dynamic Movement Primitives for encoding periodic motions with predefined accuracy
Dimitrios Papageorgiou, D. Argiropoulos, Z. Doulgeri
In this work, the utilization of Dirichlet (periodic sinc) basis functions in DMPs for encoding periodic motions is proposed. By utilizing such kernels, we are able to analytically compute the minimum required number of kernels based only on the predefined accuracy, which is a hyperparameter that can be intuitively selected. The computation of the minimum required number of kernels is based on the frequency content of the demonstrated motion. The learning procedure essentially consists of sampling the demonstrated trajectory. The approach is validated through simulations and experiments with the KUKA LWR4+ robot, which show that, with the automatically calculated number of basis functions, the proposed DMP model achieves the predefined accuracy.
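As an illustration of the underlying idea (a minimal sketch, not the authors' formulation; the demonstrated trajectory, harmonic bound and kernel normalization below are assumptions), a periodic signal containing harmonics up to k_max can be reproduced exactly from N = 2*k_max + 1 equally spaced phase samples using periodic-sinc (Dirichlet) kernels centred at those samples, with the kernel weights given directly by the sampled values:

```python
import numpy as np

def periodic_sinc(phi, n_kernels):
    """Normalized Dirichlet (periodic sinc) kernel, period 2*pi, n_kernels odd."""
    phi = np.asarray(phi, dtype=float)
    den = n_kernels * np.sin(phi / 2.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        vals = np.sin(n_kernels * phi / 2.0) / den
    return np.where(np.isclose(den, 0.0), 1.0, vals)

def demo_traj(phi):
    """Hypothetical demonstrated periodic trajectory (harmonics up to 3)."""
    return 0.3 * np.sin(phi) + 0.1 * np.sin(3 * phi + 0.5)

k_max = 3                 # highest harmonic present in the demonstration
N = 2 * k_max + 1         # minimum (odd) number of kernels for exact reconstruction
centers = 2 * np.pi * np.arange(N) / N
weights = demo_traj(centers)   # "learning" = sampling the demonstration at the kernel centres

phi_test = np.linspace(0, 2 * np.pi, 400, endpoint=False)
approx = sum(w * periodic_sinc(phi_test - c, N) for w, c in zip(weights, centers))
# Machine-precision error for this band-limited demonstration.
print(np.max(np.abs(approx - demo_traj(phi_test))))
```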
{"title":"Dirichlet-based Dynamic Movement Primitives for encoding periodic motions with predefined accuracy","authors":"Dimitrios Papageorgiou, D. Argiropoulos, Z. Doulgeri","doi":"10.1109/RO-MAN53752.2022.9900755","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900755","url":null,"abstract":"In this work, the utilization of Dirichlet (periodic sinc) base functions in DMPs for encoding periodic motions is proposed. By utilizing such kernels, we are able to analytically compute the minimum required number of kernels based only on the predefined accuracy, which is a hyperparameter that can be intuitively selected. The computation of the minimum required number of kernels is based on the frequency content of the demonstrated motion. The learning procedure essentially consists of the sampling of the demonstrated trajectory. The approach is validated through simulations and experiments with the KUKA LWR4+ robot, which show that utilizing the automatically calculated number of basis functions, the pre-defined accuracy is achieved by the proposed DMP model.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123308514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-29, DOI: 10.1109/RO-MAN53752.2022.9900774
Impacts of Teaching towards Training Gesture Recognizers for Human-Robot Interaction
Jianxun Tan, Wesley P. Chan, Nicole L. Robinson, D. Kulić, E. Croft
The use of hand-based gestures has been proposed as an intuitive way for people to communicate with robots. Typically, the set of gestures is defined by the experimenter. However, existing works do not necessarily focus on gestures that are communicative, and it is unclear whether the selected gestures are actually intuitive to users. This paper investigates whether different people inherently use similar gestures to convey the same commands to robots, and how teaching gestures when collecting demonstrations for training recognizers can improve the resulting accuracy. We conducted this work in two stages. In Stage 1, we conducted an online user study (n=190) to investigate whether people use similar gestures to communicate the same set of given commands to a robot when no guidance or training is given. Results revealed large variations in the gestures used among individuals in the absence of training. Training a gesture recognizer on this dataset resulted in an accuracy of around 20%. In response, Stage 2 involved proposing a common set of gestures for the commands. We taught these gestures through demonstrations and collected ~7500 videos of gestures from study participants to train another gesture recognition model. Initial results showed improved accuracy, but a number of gestures had high confusion rates. Refining our gesture set and recognition model by removing those gestures, we achieved a final accuracy of 84.1 ± 2.4%. We integrated the gesture recognition model into the ROS framework and demonstrated a use case in which a person commands a robot to perform a pick-and-place task using the gesture set.
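For context, a minimal sketch of the kind of ROS integration mentioned at the end of the abstract — a node that publishes recognized gesture commands for downstream pick-and-place logic. The topic name, command labels and the stubbed classifier are assumptions for illustration, not the authors' actual interface:

```python
import random
import rospy
from std_msgs.msg import String

GESTURE_COMMANDS = ["pick", "place", "stop", "come_here"]  # hypothetical label set

def classify_latest_frames():
    """Stand-in for the trained gesture recognition model."""
    return random.choice(GESTURE_COMMANDS)

def main():
    rospy.init_node("gesture_recognizer")
    pub = rospy.Publisher("/gesture_command", String, queue_size=10)
    rate = rospy.Rate(5)  # classify and publish at 5 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data=classify_latest_frames()))
        rate.sleep()

if __name__ == "__main__":
    main()
```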
{"title":"Impacts of Teaching towards Training Gesture Recognizers for Human-Robot Interaction","authors":"Jianxun Tan, Wesley P. Chan, Nicole L. Robinson, D. Kulić, E. Croft","doi":"10.1109/RO-MAN53752.2022.9900774","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900774","url":null,"abstract":"The use of hand-based gestures has been proposed as an intuitive way for people to communicate with robots. Typically the set of gestures is defined by the experimenter. However, existing works do not necessarily focus on gestures that are communicative, and it is unclear whether the selected gesture are actually intuitive to users. This paper investigates whether different people inherently use similar gestures to convey the same commands to robots, and how teaching of gestures when collecting demonstrations for training recognizers can improve resulting accuracy. We conducted this work in two stages. In Stage 1, we conducted an online user study (n=190) to investigate if people use similar gestures to communicate the same set of given commands to a robot when no guidance or training was given. Results revealed large variations in the gestures used among individuals With the absences of training. Training a gesture recognizer using this dataset resulted in an accuracy of around 20%. In response to this, Stage 2 involved proposing a common set of gestures for the commands. We taught these gestures through demonstrations and collected ~ 7500 videos of gestures from study participants to train another gesture recognition model. Initial results showed improved accuracy but a number of gestures had high confusion rates. Refining our gesture set and recognition model by removing those gestures, We achieved an final accuracy of 84.1 ± 2.4%. We integrated the gesture recognition model into the ROS framework and demonstrated a use case, where a person commands a robot to perform a pick and place task using the gesture set.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122993475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-29, DOI: 10.1109/RO-MAN53752.2022.9900677
Giving Social Robots a Conversational Memory for Motivational Experience Sharing
Avinash Saravanan, Maria Tsfasman, Mark Antonius Neerincx, Catharine Oertel
In ongoing and consecutive conversations with persons, a social robot has to determine which aspects to remember and how to address them in the conversation. In the health domain, important aspects concern the health-related goals, the experienced progress (expressed sentiment) and the ongoing motivation to pursue them. Despite the progress in speech technology and conversational agents, most social robots lack a memory for such experience sharing. This paper presents the design and evaluation of a conversational memory for personalized behavior change support conversations on healthy nutrition via memory-based motivational rephrasing. The main hypothesis is that referring to previous sessions improves motivation and goal attainment, particularly when references vary. In addition, the paper explores to what extent motivational rephrasing affects users' perception of the conversational agent (the virtual Furhat). An experiment with 79 participants, consisting of three conversation sessions, was conducted via Zoom. The results showed a significant increase in participants' change in motivation when multiple references to previous sessions were provided.
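A minimal sketch of what such a conversational memory could look like as a data structure — per-session goal, sentiment and motivation, and a rephrased opening that refers back to the previous session. Field names and phrasing templates are illustrative assumptions, not the system described in the paper:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionRecord:
    goal: str          # e.g. "eat two pieces of fruit per day"
    sentiment: str     # e.g. "positive", "neutral", "negative"
    motivation: int    # self-reported motivation, e.g. 1-7

@dataclass
class ConversationalMemory:
    sessions: List[SessionRecord] = field(default_factory=list)

    def remember(self, record: SessionRecord) -> None:
        self.sessions.append(record)

    def rephrase_opening(self) -> str:
        """Generate an opening line that references the previous session, if any."""
        if not self.sessions:
            return "Let's talk about your nutrition goals."
        last = self.sessions[-1]
        if last.sentiment == "positive":
            return f"Last time you felt good about '{last.goal}'. How did it go this week?"
        return f"Last time '{last.goal}' felt difficult. Shall we adjust it together?"

memory = ConversationalMemory()
memory.remember(SessionRecord("eat two pieces of fruit per day", "positive", 5))
print(memory.rephrase_opening())
```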
{"title":"Giving Social Robots a Conversational Memory for Motivational Experience Sharing","authors":"Avinash Saravanan, Maria Tsfasman, Mark Antonius Neerincx, Catharine Oertel","doi":"10.1109/RO-MAN53752.2022.9900677","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900677","url":null,"abstract":"In ongoing and consecutive conversations with persons, a social robot has to determine which aspects to remember and how to address them in the conversation. In the health domain, important aspects concern the health-related goals, the experienced progress (expressed sentiment) and the ongoing motivation to pursue them. Despite the progress in speech technology and conversational agents, most social robots lack a memory for such experience sharing. This paper presents the design and evaluation of a conversational memory for personalized behavior change support conversations on healthy nutrition via memory-based motivational rephrasing. The main hypothesis is that referring to previous sessions improves motivation and goal attainment, particularly when references vary. In addition, the paper explores how far motivational rephrasing affects user’s perception of the conversational agent (the virtual Furhat). An experiment with 79 participants was conducted via Zoom, consisting of three conversation sessions. The results showed a significant increase in participants’ change in motivation when multiple references to previous sessions were provided.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126294296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-29, DOI: 10.1109/RO-MAN53752.2022.9900849
Differences and Biases in Mentalizing About Humans and Robots
Sophie Husemann, Jan Pöppel, S. Kopp
Theory of Mind (ToM) is the process of ascribing mental states to other individuals we interact with. It is used to make sense of observed actions and to predict future actions. Previous studies revealed that humans mentalize about artificial agents, but it is not entirely clear how and to what extent. At the same time, mentalizing about humans is often influenced by biases such as an egocentric bias. We present a study investigating differences in participants' ToM and their susceptibility to an egocentric bias when observing humans vs. robots. The participants observed an autonomous robot, a controlled robot, and a human in the same scenarios. The agents had to find an object in a laboratory. While watching the agents, participants had to make several action predictions as an implicit measure of ToM, potentially revealing an egocentric bias. At the end, questions about the agent's responsibility, awareness and strategy were asked. The results indicate that while participants generally performed ToM for all types of agents, both the scenario and the agent type appear to influence participants' likelihood of exhibiting an egocentric bias.
{"title":"Differences and Biases in Mentalizing About Humans and Robots","authors":"Sophie Husemann, Jan Pöppel, S. Kopp","doi":"10.1109/RO-MAN53752.2022.9900849","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900849","url":null,"abstract":"Theory of Mind is the process of ascribing mental states to other individuals we interact with. It is used for sense-making of the observed actions and prediction of future actions. Previous studies revealed that humans mentalize about artificial agents, but it is not entirely clear how and to what extent. At the same time mentalizing about humans is often influenced by biases such as an egocentric bias. We present a study investigating differences in participants’ ToM and their susceptibility to an egocentric bias when observing humans vs robots. The participants observed an autonomous robot, a controlled robot, and a human in the same scenarios. The agents had to find an object in a laboratory. While watching the agents, participants had to make several action predictions as an implicit measure of ToM, potentially revealing an egocentric bias. At the end, questions about the agent’s responsibility, awareness and strategy were asked. The results indicate that while participants generally performed ToM for all types of agents, both the scenario as well as the agent type appear to influence participants’ likelihood of exhibiting an egocentric bias.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121077155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-29, DOI: 10.1109/RO-MAN53752.2022.9900718
Questioning Wizard of Oz: Effects of Revealing the Wizard behind the Robot
Jauwairia Nasir, Pierre Oppliger, Barbara Bruno, P. Dillenbourg
Wizard of Oz, a very commonly employed technique in human-robot interaction, faces the criticism of being deceptive, as the humans interacting with the robot are told, if at all, only at the end of their interaction that there was in fact a human behind the robot. What if the robot reveals the wizard behind itself very early in the interaction? We built a deep Wizard of Oz setup that allows a robot to play together with a human against a computer AI in the context of the game Connect 4. This cooperative game interaction against a common opponent is then followed by a conversation between the human and the robot. We conducted an exploratory user study with 29 adults with three conditions in which the robot reveals the wizard, lies about the wizard, and does not say anything, respectively. We also split the data based on how the participants perceive the robot in terms of autonomy. Using different metrics, we evaluate how the users interact with and perceive the robot in both the experimental and perceived conditions. We find that while there is indeed a significant difference between the experimental conditions in the participants' willingness to follow the robot's suggestions, as well as in the effort they put into proving themselves as humans (reverse Turing test), there is no significant difference in their perception of the robot. Additionally, how humans perceive whether the robot is tele-operated or autonomous seems to be unaffected by the robot revealing its identity, i.e., pre-conceived notions may remain uninfluenced even if the robot explicitly states otherwise. Lastly, and interestingly, in the perception-based conditions the absence of statistical significance may suggest that, in certain contexts, Wizard of Oz may not require hiding the wizard after all.
{"title":"Questioning Wizard of Oz: Effects of Revealing the Wizard behind the Robot","authors":"Jauwairia Nasir, Pierre Oppliger, Barbara Bruno, P. Dillenbourg","doi":"10.1109/RO-MAN53752.2022.9900718","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900718","url":null,"abstract":"Wizard of Oz, a very commonly employed technique in human-robot interaction, faces the criticism of being deceptive as the humans interacting with the robot are told, if at all, only at the end of their interaction that there was in fact a human behind the robot. What if the robot reveals the wizard behind itself very early in the interaction? We built a deep wizard of Oz setup to allow for a robot to play together with a human against a computer AI in the context of Connect 4 game. This cooperative game interaction against a common opponent is then followed by a conversation between the human and the robot. We conducted an exploratory user study with 29 adults with three conditions where the robot reveals the wizard, lies about the wizard, and does not say anything, respectively. We also split the data based on how the participants perceive the robot in terms of autonomy. Using different metrics, we evaluate how the users interact with and perceive the robot in both the experimental and perceived conditions. We find that while there is indeed a significant difference in the participants willingness to follow robots suggestions between the experimental conditions as well as in the effort they put to prove themselves as humans (reverse Turing test), there isn’t any significant difference in their robot perception. Additionally, how humans perceive whether the robot is tele-operated or autonomous seems to be indifferent to the robot revealing its identity, i.e., the pre-conceived notions may be uninfluenced even if the robot explicitly states otherwise. Lastly, interestingly in the perception based conditions, absence of statistical significance may suggest that, in certain contexts, wizard of oz may not require hiding the wizard after all.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121004888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-29, DOI: 10.1109/RO-MAN53752.2022.9900569
Exploring requirements and opportunities for social robots in primary mathematics education
L. Elloumi, Marianne Bossema, S. M. D. Droog, Matthijs H. J. Smakman, S. V. Ginkel, M. Ligthart, K. Hoogland, K. Hindriks, S. B. Allouch
Social robots have been introduced in different fields such as retail, health care and education. Primary education in the Netherlands (and elsewhere) recently faced new challenges because of the COVID-19 pandemic, lockdowns and quarantines, including students falling behind and teachers burdened with high workloads. Together with two Dutch municipalities and nine primary schools, we are exploring the long-term use of social robots to study how they might support teachers in primary education, with a focus on mathematics education. This paper presents an explorative study to define requirements for a social robot math tutor. Multiple focus groups were held with the two main stakeholders, namely teachers and students. The aims of the focus groups were 1) to understand the current situation of mathematics education at the upper primary school level, 2) to identify the problems that teachers and students encounter in mathematics education, and 3) to identify opportunities for deploying a social robot math tutor in primary education from the perspective of both teachers and students. The results inform the development of social robots and opportunities for pedagogical methods used in math teaching, child-robot interaction and potential support for teachers in the classroom.
{"title":"Exploring requirements and opportunities for social robots in primary mathematics education","authors":"L. Elloumi, Marianne Bossema, S. M. D. Droog, Matthijs H. J. Smakman, S. V. Ginkel, M. Ligthart, K. Hoogland, K. Hindriks, S. B. Allouch","doi":"10.1109/RO-MAN53752.2022.9900569","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900569","url":null,"abstract":"Social robots have been introduced in different fields such as retail, health care and education. Primary education in the Netherlands (and elsewhere) recently faced new challenges because of the COVID-19 pandemic, lockdowns and quarantines including students falling behind and teachers burdened with high workloads. Together with two Dutch municipalities and nine primary schools we are exploring the long-term use of social robots to study how social robots might support teachers in primary education, with a focus on mathematics education. This paper presents an explorative study to define requirements for a social robot math tutor. Multiple focus groups were held with the two main stakeholders, namely teachers and students. During the focus groups the aim was 1) to understand the current situation of mathematics education in the upper primary school level, 2) to identify the problems that teachers and students encounter in mathematics education, and 3) to identify opportunities for deploying a social robot math tutor in primary education from the perspective of both the teachers and students. The results inform the development of social robots and opportunities for pedagogical methods used in math teaching, child-robot interaction and potential support for teachers in the classroom.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116691182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-29, DOI: 10.1109/RO-MAN53752.2022.9900705
Benchmarking deep neural networks for gesture recognition on embedded devices
Stefano Bini, Antonio Greco, Alessia Saggese, M. Vento
Gestures are one of the most used forms of communication between humans. In recent years, given the trend of adapting factories to the Industry 4.0 paradigm, the scientific community has shown a growing interest in the design of Gesture Recognition (GR) algorithms for Human-Robot Interaction (HRI) applications. Within this context, the GR algorithm needs to work in real time and on embedded platforms with limited resources. However, in the available scientific literature, the different proposed neural networks (i.e. 2D and 3D) and the different modalities used for feeding the network (i.e. RGB, RGB-D, optical flow) typically aim at optimizing accuracy, without paying much attention to feasibility on low-power hardware devices. An analysis of the trade-off between accuracy and computational burden (for both networks and modalities) is therefore important to allow GR algorithms to work in industrial robotics applications. In this paper, we perform a wide benchmark focusing not only on accuracy but also on computational burden, involving two different architectures (2D and 3D), two different backbones (MobileNet, ResNeXt) and four types of input modalities (RGB, Depth, Optical Flow, Motion History Image) and their combinations.
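As a rough illustration of the computational-burden side of such a benchmark (a sketch using stock torchvision models; the input sizes, timing protocol and model choices are assumptions, not the paper's protocol), one can compare parameter counts and CPU inference latency of a 2D and a 3D backbone:

```python
import time
import torch
from torchvision.models import mobilenet_v2
from torchvision.models.video import r3d_18

def profile(model, dummy_input, runs=10):
    """Return (parameters in millions, mean CPU latency in ms) for one forward pass."""
    model.eval()
    params_m = sum(p.numel() for p in model.parameters()) / 1e6
    with torch.no_grad():
        model(dummy_input)                       # warm-up
        start = time.perf_counter()
        for _ in range(runs):
            model(dummy_input)
        latency_ms = (time.perf_counter() - start) / runs * 1e3
    return params_m, latency_ms

# 2D network: a single RGB frame; 3D network: a clip of 16 RGB frames.
print("mobilenet_v2:", profile(mobilenet_v2(), torch.randn(1, 3, 224, 224)))
print("r3d_18:     ", profile(r3d_18(), torch.randn(1, 3, 16, 112, 112)))
```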
{"title":"Benchmarking deep neural networks for gesture recognition on embedded devices *","authors":"Stefano Bini, Antonio Greco, Alessia Saggese, M. Vento","doi":"10.1109/RO-MAN53752.2022.9900705","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900705","url":null,"abstract":"The gesture is one of the most used forms of communication between humans; in recent years, given the new trend of factories to be adapted to Industry 4.0 paradigm, the scientific community has shown a growing interest towards the design of Gesture Recognition (GR) algorithms for Human-Robot Interaction (HRI) applications. Within this context, the GR algorithm needs to work in real time and over embedded platforms, with limited resources. Anyway, when looking at the available scientific literature, the aim of the different proposed neural networks (i.e. 2D and 3D) and of the different modalities used for feeding the network (i.e. RGB, RGB-D, optical flow) is typically the optimization of the accuracy, without strongly paying attention to the feasibility over low power hardware devices. Anyway, the analysis related to the trade-off between accuracy and computational burden (for both networks and modalities) becomes important so as to allow GR algorithms to work in industrial robotics applications. In this paper, we perform a wide benchmarking focusing not only on the accuracy but also on the computational burden, involving two different architectures (2D and 3D), with two different backbones (MobileNet, ResNeXt) and four types of input modalities (RGB, Depth, Optical Flow, Motion History Image) and their combinations.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116556059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-29, DOI: 10.1109/RO-MAN53752.2022.9900561
EvaSIM: a Software Simulator for the EVA Open-source Robotics Platform
M. Rocha, Dagoberto Cruz-Sandoval, J. Favela, D. Muchaluat-Saade
Socially Assistive Robots (SARs) have successfully been used in various types of health therapies as non-pharmacological interventions. A SAR called EVA (Embodied Voice Assistant) is an open-source robotics platform intended to serve as a tool to support research in Human-Robot Interaction. The EVA robot was originally developed to assist in non-pharmacological interventions for people with Dementia and has more recently been applied to children with Autism Spectrum Disorder. EVA provides multimodal interactions such as verbal and non-verbal communication, facial recognition and light sensory effects. Although EVA uses low-cost hardware and open-source software, it is not always possible, or practical, to have a physical robot at hand, particularly during rapid iterative cycles of design and evaluation of therapies. This motivated us to develop a simulator that allows testing the therapy scripts to be enacted by the EVA robot. This work proposes EvaSIM (EVA Robot Simulator), a simulator that can interpret EVA script code and emulate the multimodal interaction capabilities of the physical robot, such as text-to-speech, facial expression recognition, and control of light sensory effects. Several EVA scripts were run using the simulator, attesting that they have the same behaviour as on the physical robot. EvaSIM can serve as a support tool in the teaching/learning process of the robot's scripting language, enabling the training of technicians and therapists in script development and testing for the EVA robot.
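A simplified sketch of the core idea of a script-interpreting simulator: each script command is dispatched to an emulated action instead of the robot's hardware. The command names and script format below are hypothetical, not the actual EVA scripting language:

```python
# Illustrative script: a list of command dictionaries (hypothetical format).
SCRIPT = [
    {"command": "speak", "text": "Hello, shall we start today's session?"},
    {"command": "light", "color": "blue", "seconds": 2},
    {"command": "wait_for_expression", "expected": "happy"},
]

def emulate(step):
    """Dispatch one script command to a software-emulated action."""
    cmd = step["command"]
    if cmd == "speak":
        print(f"[TTS]   saying: {step['text']}")
    elif cmd == "light":
        print(f"[LIGHT] {step['color']} for {step['seconds']}s")
    elif cmd == "wait_for_expression":
        # A real simulator could feed this from a webcam model or a test fixture.
        print(f"[FACE]  pretending the user shows '{step['expected']}'")
    else:
        raise ValueError(f"unknown command: {cmd}")

for step in SCRIPT:
    emulate(step)
```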
{"title":"EvaSIM: a Software Simulator for the EVA Open-source Robotics Platform","authors":"M. Rocha, Dagoberto Cruz-Sandoval, J. Favela, D. Muchaluat-Saade","doi":"10.1109/RO-MAN53752.2022.9900561","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900561","url":null,"abstract":"Socially Assistive Robots (SARs) have successfully been used in various types of health therapies as non-pharmacological interventions. A SAR called EVA (Embodied Voice Assistant) is an open-source robotics platform intended to serve as a tool to support research in Human-Robot Interaction. The EVA robot was originally developed to assist in non-pharmacological interventions for people with Dementia and has more recently been applied for children with Autism Spectrum Disorder. EVA provides multimodal interactions such as verbal and non-verbal communication, facial recognition and light sensory effects. Although EVA uses low-cost hardware and open-source software, it is not always possible, or practical, to have a physical robot at hand, particularly during rapid iterative cycles of design and evaluation of therapies. Thus, our motivation to develop a simulator that allows testing the scripts of therapies to be enacted by the EVA robot. This work proposes EvaSIM (EVA Robot Simulator), a simulator that can interpret an EVA script code and emulate the multimodal interaction capabilities of the physical robot, such as Text-To-Speech, facial expression recognition, controlling light sensory effects, etc. Several EVA scripts were run using the simulator attesting that they have the same behaviour as the physical robot. EvaSIM can serve as a support tool in the teaching/learning process of the robot’s scripting language, enabling the training of technicians and therapists in script development and testing for the EVA robot.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122594430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-29, DOI: 10.1109/RO-MAN53752.2022.9900829
Fuzzy Based Control of a Flexible Bevel-Tip Needle for Percutaneous Interventions
K. Halder, M. F. Orlando, R. S. Anand
In minimally invasive surgical procedures, flexible bevel-tip needles are widely used for percutaneous interventions because they enhance target-reaching accuracy. However, target-reaching accuracy suffers from tissue inhomogeneity, deformation of the tissue domain and improper steering techniques. The main objective of percutaneous interventional procedures is to ensure patient safety and to reach the desired target position accurately. Several researchers have already developed approaches to control needle steering for precise target reaching. To overcome the complexity of existing controllers, we propose a fuzzy-based controller to regulate the needle in a specified plane. Our method combines the needle's non-holonomic-constraint-based kinematics inside the tissue domain with a Lyapunov-analysis-based fuzzy rule base for the fuzzy inference system, which ensures the closed-loop stability of the needling system for percutaneous interventional procedures. We have also validated our control scheme through extensive simulations and experimentation in biological tissue.
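To give a flavour of such a scheme (a minimal sketch under strong assumptions: planar constant-curvature bevel-tip kinematics, triangular membership functions, singleton consequents, and a crude combined lateral/heading error; this is not the controller designed in the paper), a fuzzy rule base can map a steering error to a commanded effective curvature:

```python
import numpy as np

KAPPA = 0.05   # natural curvature of the bevel-tip needle [1/mm] (assumed)
V = 1.0        # insertion speed [mm/s]
DT = 0.05      # integration step [s]

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_curvature(err):
    """Defuzzify a steering error into an effective curvature in [-KAPPA, KAPPA]."""
    e = float(np.clip(err, -5.0, 5.0))
    mu = {"neg": tri(e, -10.0, -5.0, 0.0),
          "zero": tri(e, -2.0, 0.0, 2.0),
          "pos": tri(e, 0.0, 5.0, 10.0)}
    out = {"neg": +KAPPA, "zero": 0.0, "pos": -KAPPA}   # singleton consequents
    return sum(mu[k] * out[k] for k in mu) / (sum(mu.values()) + 1e-9)

# Simulate planar insertion toward the planned path y = 0.
x, y, theta = 0.0, 3.0, 0.0            # start 3 mm off the planned path
for _ in range(int(60.0 / DT)):
    err = y + 10.0 * theta             # crude combined lateral + heading error for damping
    kappa_cmd = fuzzy_curvature(err)
    x += V * np.cos(theta) * DT
    y += V * np.sin(theta) * DT
    theta += V * kappa_cmd * DT
print(f"final lateral offset: {y:.3f} mm")
```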
{"title":"Fuzzy Based Control of a Flexible Bevel-Tip Needle for Percutaneous Interventions","authors":"K. Halder, M. F. Orlando, R. S. Anand","doi":"10.1109/RO-MAN53752.2022.9900829","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900829","url":null,"abstract":"In Minimal Invasive Surgical procedures, flexible bevel tip needles are widely used for percutaneous interventions due to the advantage of enhancing the target reaching accuracy. Here, the target reaching accuracy suffers due to tissue in-homogeneity, deformation in tissue domain and improper steering techniques. The main objective of the percutaneous interventional procedures is ensuring patient safety and reaching desired target position accurately. Several researchers have al-ready developed many approaches to control the needle steering for precise target reaching. To overcome complex approaches in existing controllers, we have proposed a fuzzy based controller to regulate the needle in a specified plane. Our designed method involves the needle non-holonomic constraints based kinematics inside tissue domain and Lyapunov analysis based fuzzy rule base for fuzzy inference system which ensures the closed loop stability of needling system for percutaneous interventional procedures. We have also validated our designed control scheme through extensive simulations and experimentation in biological tissue.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131444837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-29, DOI: 10.1109/RO-MAN53752.2022.9900721
AugRE: Augmented Robot Environment to Facilitate Human-Robot Teaming and Communication
Frank Regal, Christina Petlowany, Can Pehlivanturk, C. V. Sice, C. Suarez, Blake Anderson, M. Pryor
Augmented Reality (AR) provides a method to superimpose real-time information on the physical world. AR is well-suited for complex robotic systems to help users understand robot behavior, status, and intent. This paper presents an AR system, Augmented Robot Environment (AugRE), that combines ROS-based robotic systems with Microsoft HoloLens 2 AR headsets to form a scalable multi-agent human-robot teaming system for indoor and outdoor exploration. The system allows multiple users to simultaneously localize, supervise, and receive labeled images from robotic clients. An overview of AugRE and details of the novel system architecture that allows for large-scale human-robot teaming are presented below. Studies showcasing system performance with multiple robotic clients are presented. Results show that AugRE can scale to 50 robotic clients with minimal performance degradation, due in part to key components that leverage a recent advancement in robotic client-to-client communication called Robofleet. Finally, we discuss new capabilities enabled by AugRE.
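As a rough sketch of the kind of load test behind a multi-client scaling result, one can spin up many simulated robot clients that publish pose updates over ROS 1. The client count, topic names and rates are assumptions, and this says nothing about Robofleet's internals:

```python
import rospy
from geometry_msgs.msg import PoseStamped

NUM_CLIENTS = 50   # assumed client count for the load test
RATE_HZ = 10

def main():
    rospy.init_node("simulated_robot_clients")
    pubs = [rospy.Publisher(f"/robot_{i}/pose", PoseStamped, queue_size=1)
            for i in range(NUM_CLIENTS)]
    rate = rospy.Rate(RATE_HZ)
    while not rospy.is_shutdown():
        stamp = rospy.Time.now()
        for i, pub in enumerate(pubs):
            msg = PoseStamped()
            msg.header.stamp = stamp
            msg.header.frame_id = "map"
            msg.pose.position.x = float(i)   # placeholder pose per client
            pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```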
{"title":"AugRE: Augmented Robot Environment to Facilitate Human-Robot Teaming and Communication *","authors":"Frank Regal, Christina Petlowany, Can Pehlivanturk, C. V. Sice, C. Suarez, Blake Anderson, M. Pryor","doi":"10.1109/RO-MAN53752.2022.9900721","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900721","url":null,"abstract":"Augmented Reality (AR) provides a method to superimpose real-time information on the physical world. AR is well-suited for complex robotic systems to help users understand robot behavior, status, and intent. This paper presents an AR system, Augmented Robot Environment (AugRE), that combines ROS-based robotic systems with Microsoft HoloLens 2 AR headsets to form a scalable multi-agent human-robot teaming system for indoor and outdoor exploration. The system allows multiple users to simultaneously localize, supervise, and receive labeled images from robotic clients. An overview of AugRE and details of the novel system architecture that allows for large-scale human-robot teaming is presented below. Studies showcasing system performance with multiple robotic clients are presented. Results show that AugRE can scale to 50 robotic clients with minimal performance degradation, due in part to key components that leverage a recent advancement in robotic client-to-client communication called Robofleet. Finally we discuss new capabilities enabled by AugRE.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133192218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}