Augmented Reality interface to verify Robot Learning
Maximilian Diehl, Alexander Plopski, H. Kato, Karinne Ramirez-Amaro
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223502
Teaching robots new skills is considered an important aspect of Human-Robot Collaboration (HRC). One challenge is that robots cannot communicate feedback in the same ways humans do. This decreases trust in robots, since it is difficult to judge, before the actual execution, whether the robot has learned the task correctly. In this paper, we introduce an Augmented Reality (AR) based visualization tool that allows humans to verify the taught behavior before its execution. Our verification interface displays a virtual simulation embedded in the real environment, temporally coupled with a semantic description of the current action. We developed three designs based on different interface/visualization-technology combinations to explore the potential benefits of enhanced simulations using AR over traditional simulation environments such as RViz. We conducted a user study with 18 participants to assess the effectiveness of the proposed visualization tools regarding error-detection capabilities. One advantage of the AR interfaces is that they provide more realistic feedback than traditional simulations, at a lower cost, since the entire environment does not have to be modeled.
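To make the coupling concrete, here is a minimal sketch (not the authors' implementation) of how a verification interface can step through a learned behavior, rendering each action in the simulated/AR view while displaying its semantic description; all names (`LearnedAction`, `render_step`, `show_caption`) are hypothetical placeholders.

```python
import time
from dataclasses import dataclass

@dataclass
class LearnedAction:
    name: str           # semantic label, e.g. "grasp"
    target: str         # object the action operates on
    duration_s: float   # playback time of the simulated motion

def verify_taught_behavior(actions, render_step, show_caption):
    """Play back the learned behavior: render each action in the
    simulation/AR view while showing its semantic description."""
    for action in actions:
        show_caption(f"{action.name}({action.target})")  # semantic feedback
        start = time.time()
        while time.time() - start < action.duration_s:
            render_step(action)   # advance the virtual robot one frame
            time.sleep(1 / 30)    # ~30 FPS playback

# A user watching this playback can abort before real execution if,
# say, "place(cup)" is rendered at the wrong location.
```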
{"title":"Augmented Reality interface to verify Robot Learning","authors":"Maximilian Diehl, Alexander Plopski, H. Kato, Karinne Ramirez-Amaro","doi":"10.1109/RO-MAN47096.2020.9223502","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223502","url":null,"abstract":"Teaching robots new skills is considered as an important aspect of Human-Robot Collaboration (HRC). One challenge is that robots cannot communicate feedback in the same ways as humans do. This decreases the trust towards robots since it is difficult to judge, before the actual execution, if the robot has learned the task correctly. In this paper, we introduce an Augmented Reality (AR) based visualization tool that allows humans to verify the taught behavior before its execution. Our verification interface displays a virtual simulation embedded into the real environment, timely coupled with a semantic description of the current action. We developed three designs based on different interface/visualization-technology combinations to explore the potential benefits of enhanced simulations using AR over traditional simulation environments like RViz. We conducted a user study with 18 participants to assess the effectiveness of the proposed visualization tools regarding error detection capabilities. One of the advantages of the AR interfaces is that they provide more realistic feedback than traditional simulations with a lower cost of not having to model the entire environment.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"33 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123806411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meet Your Personal Cobot, But Don’t Touch It Just Yet*
Tudor B. Ionescu
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223573
This paper reports on a research project aimed at introducing a collaborative industrial robot into a makerspace (a public machine shop equipped with digital manufacturing technologies). Using an ethnographic approach, we observed how collaborations between researchers and non-experts are facilitated by makerspaces; how robot safety is construed and negotiated by the actors involved in the project; and how knowledge about collaborative robot safety and applications is produced in a context unforeseen by the creators of the technology. The proposed analysis suggests that the sociotechnical configuration of the studied project resembles a trading zone, in which various types of knowledge and expertise are exchanged between the researchers of the interdisciplinary project team and makerspace members. As we argue, the trading zone model can be useful in the analysis and organization of participatory HRI research.
{"title":"Meet Your Personal Cobot, But Don’t Touch It Just Yet*","authors":"Tudor B. Ionescu","doi":"10.1109/RO-MAN47096.2020.9223573","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223573","url":null,"abstract":"This paper reports on a research project aimed at introducing a collaborative industrial robot into a makerspace (a public machine shop equipped with digital manufacturing technologies). Using an ethnographic approach, we observed how collaborations between researchers and non-experts are facilitated by makerspaces, how robot safety is being construed and negotiated by the actors involved in the project; and how knowledge about collaborative robot safety and applications is produced in a context previously unforeseen by the creators of the technology. The proposed analysis suggests that the sociotechnical configuration of the studied project resembles that of a trading zone, in which various types of knowledge and expertise are exchanged between the researchers from the interdisciplinary project team and makerspace members. As we shall argue, the trading zone model can be useful in the analysis and organization of participatory HRI research.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126062306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Efficiency and Safety in Teleoperated Robotic Manipulators using Motion Scaling and Force Feedback
Yongmin Cho, Frank L. Hammond
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223493
Recent surges in global construction spending are driving the need for safer, more efficient construction methods. One potential way of improving construction methods is to provide user interfaces that allow human operators to control machinery in a more intuitive and strategic manner. This paper explores the use of motion scaling and haptic feedback to improve task completion speed and force control during construction-related teleoperated robotic manipulation tasks. In this study, we design a bench-top Teleoperated Motion Scaling Robotic Arm (TMSRA) platform that allows the human operator to control the motion-mapping rate between the master (haptic console) and slave (robotic excavator) devices, while also providing force feedback and virtual safety functions to help prevent excessive force application by the slave device. We experimentally evaluated the impact of motion scaling and force feedback on human users' ability to perform simulated construction tasks. Results from simulated robotic excavation and demolition tasks show that the maximum force applied to fictive buried utilities was reduced by 77.67% and 76.36%, respectively, owing to the force feedback and safety function. Results from simulated payload pushing/sliding tasks demonstrate that user-controlled motion scaling increases task efficiency, reducing completion times by at least 31.41% and as much as 47.76%.
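The two mechanisms the platform combines, user-adjustable motion scaling and a force-limiting safety function, can be sketched as follows. This is an illustrative sketch, not the paper's controller; all gains and limits are made-up placeholders.

```python
import numpy as np

def slave_command(master_delta, scale, contact_force, force_limit=50.0):
    """Map a master (haptic console) displacement to a slave (excavator)
    displacement, zeroing the motion component that would push further
    into contact once the measured force exceeds the safety limit."""
    cmd = scale * np.asarray(master_delta, dtype=float)   # motion scaling
    force = np.asarray(contact_force, dtype=float)
    if np.linalg.norm(force) > force_limit:               # virtual safety function
        normal = force / np.linalg.norm(force)            # contact normal (away from surface)
        push = np.dot(cmd, -normal)                       # component pushing into contact
        if push > 0:
            cmd += push * normal                          # cancel that component
    return cmd

def feedback_force(contact_force, gain=0.8):
    """Force feedback rendered at the haptic console: a scaled
    reflection of the measured contact force."""
    return gain * np.asarray(contact_force, dtype=float)
```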
{"title":"Improving Efficiency and Safety in Teleoperated Robotic Manipulators using Motion Scaling and Force Feedback","authors":"Yongmin Cho, Frank L. Hammond","doi":"10.1109/RO-MAN47096.2020.9223493","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223493","url":null,"abstract":"Recent surges in global construction spending are driving the need for safer, more efficient construction methods. One potential way of improving construction methods is to provide user interfaces that allow human operators to control machinery in a more intuitive and strategic manner. This paper explores the use of motion scaling and haptic feedback to improve task completion speed and force control during construction-related teleoperated robotic manipulation tasks.In this study, we design a bench-top Teleoperated Motion Scaling Robotic Arm (TMSRA) platform that allows the human operator to control the motion-mapping rate between the master (haptic console) and slave (robotic excavator) devices, while also providing force feedback and virtual safety functions to help prevent excessive force application by the slave device. We experimentally evaluated the impact of motion scaling and force feedback on human users' ability to perform simulated construction tasks. Experimental results from simulated robotic excavation and demolition tasks show that the maximum force applied to fictive buried utilities was reduced by 77.67% and 76.36% respectively due to the force feedback and safety function. Experimental results from simulated payload pushing/sliding tasks demonstrate that the provision of user- controlled motion scaling increases task efficiency, reducing completion times by at least 31.41%, and as much as 47.76%.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130021758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating Taste-liking with a Humanoid Robot Facilitator
Zhuoni Jie, H. Gunes
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223611
Tasting is an essential activity in our daily lives. Deploying social robots in the food and drink service industry requires that they be able to understand customers’ nonverbal behaviours, including taste-liking. Little is known about whether people alter their behavioural responses related to taste-liking when interacting with a humanoid social robot. We conducted the first beverage tasting study in which the facilitator is a human versus a humanoid social robot, with priming versus non-priming instruction styles. We found that the facilitator type and facilitation style had no significant influence on cognitive taste-liking. However, in the robot facilitator scenarios, people were more willing to follow the instructions and felt more comfortable when facilitated with priming. Our study provides new empirical findings and design implications for using humanoid social robots in the hospitality industry.
{"title":"Investigating Taste-liking with a Humanoid Robot Facilitator","authors":"Zhuoni Jie, H. Gunes","doi":"10.1109/RO-MAN47096.2020.9223611","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223611","url":null,"abstract":"Tasting is an essential activity in our daily lives. Implementing social robots in the food and drink service industry requires the social robots to be able to understand customers’ nonverbal behaviours, including taste-liking. Little is known about whether people alter their behavioural responses related to taste-liking when interacting with a humanoid social robot. We conducted the first beverage tasting study where the facilitator is a human versus a humanoid social robot with priming versus non-priming instruction styles. We found that the facilitator type and facilitation style had no significant influence on cognitive taste-liking. However, in robot facilitator scenarios, people were more willing to follow the instruction and felt more comfortable when facilitated with priming. Our study provides new empirical findings and design implications for using humanoid social robots in the hospitality industry.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129322947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physiological Data-Based Evaluation of a Social Robot Navigation System
Hasan Kivrak, Pinar Uluer, Hatice Kose, E. Gümüslü, D. Erol, Furkan Çakmak, S. Yavuz
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223539
The aim of this work is to create a social navigation system for an affective robot that acts as an assistant in the audiology department of hospitals for children with hearing impairments. Unlike traditional navigation systems, this system differentiates between objects and human beings and optimizes several parameters to maintain a social distance from humans during motion, so as not to intrude on their personal zones. For this purpose, social robot motion planning algorithms are employed to generate human-friendly paths that maintain humans’ safety and comfort during the robot’s navigation. This paper evaluates the system against traditional navigation, based on surveys and physiological data from adult participants in a preliminary study conducted before using the system with children. Although the self-report questionnaires do not show any significant difference between the robot's navigation profiles, the physiological data may be interpreted as indicating that participants felt more comfortable and less threatened in the social navigation case.
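A common way to realize this kind of human-aware planning, plausibly similar in spirit to the parameters tuned here, is to add a Gaussian personal-space cost around each detected person to the planner's costmap. A minimal sketch, with illustrative parameter values not taken from the paper:

```python
import numpy as np

def social_cost(point, person_pos, amplitude=100.0, sigma=0.8):
    """Cost of a candidate path point near a person; sigma (in metres)
    controls the radius of the personal zone the planner is discouraged
    from entering."""
    d2 = np.sum((np.asarray(point, dtype=float)
                 - np.asarray(person_pos, dtype=float)) ** 2)
    return amplitude * np.exp(-d2 / (2.0 * sigma ** 2))

# A planner would add this term to its obstacle cost for every detected
# human, so paths bend around personal zones instead of merely avoiding
# collisions, as a plain object-based costmap would.
```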
{"title":"Physiological Data-Based Evaluation of a Social Robot Navigation System","authors":"Hasan Kivrak, Pinar Uluer, Hatice Kose, E. Gümüslü, D. Erol, Furkan Çakmak, S. Yavuz","doi":"10.1109/RO-MAN47096.2020.9223539","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223539","url":null,"abstract":"The aim of this work is to create a social navigation system for an affective robot that acts as an assistant in the audiology department of hospitals for children with hearing impairments. Compared to traditional navigation systems, this system differentiates between objects and human beings and optimizes several parameters to keep at a social distance during motion when faced with humans not to interfere with their personal zones. For this purpose, social robot motion planning algorithms are employed to generate human-friendly paths that maintain humans’ safety and comfort during the robot’s navigation. This paper evaluates this system compared to traditional navigation, based on the surveys and physiological data of the adult participants in a preliminary study before using the system with children. Although the self-report questionnaires do not show any significant difference between navigation profiles of the robot, analysis of the physiological data may be interpreted that, the participants felt comfortable and less threatened in social navigation case.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"9 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129924896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Impact of Social Robots in Education: Moral Considerations of Dutch Educational Policymakers
Matthijs H. J. Smakman, J. Berket, E. Konijn
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223582
Social robots are increasingly studied and applied in the educational domain. Although they hold great potential for education, they also bring new moral challenges. In this study, we explored the moral considerations related to social robots from the perspective of Dutch educational policymakers, first identifying opportunities and concerns and then mapping them onto (moral) values from the literature. To explore these considerations, we conducted focus group sessions with Dutch educational policymakers (N = 20). Considerations ranged from the potential to lower the workload of teachers to concerns about the increased influence of commercial enterprises on the educational system. In total, the policymakers' considerations related to 15 theoretical values. Identifying the moral considerations of educational policymakers provides a better understanding of the governmental attitude towards the use of social robots, which helps in creating the moral guidelines needed for a responsible implementation of social robots in education.
{"title":"The Impact of Social Robots in Education: Moral Considerations of Dutch Educational Policymakers","authors":"Matthijs H. J. Smakman, J. Berket, E. Konijn","doi":"10.1109/RO-MAN47096.2020.9223582","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223582","url":null,"abstract":"Social robots are increasingly studied and applied in the educational domain. Although they hold great potential for education, they also bring new moral challenges. In this study, we explored the moral considerations related to social robots from the perspective of Dutch educational policymakers by first identifying opportunities and concerns and then mapping them onto (moral) values from the literature. To explore their moral considerations, we conducted focus group sessions with Dutch Educational Policymakers (N = 20). Considerations varied from the potential to lower the workload of teachers, to concerns related to the increased influence of commercial enterprises on the educational system. In total, the considerations of the policymakers related to 15 theoretical values. We identified the moral considerations of educational policymakers to gain a better understanding of a governmental attitude towards the use of social robots. This helps to create the necessary moral guidelines towards a responsible implementation of social robots in education.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122568304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards An Affective Robot Companion for Audiology Rehabilitation: How Does Pepper Feel Today?
Pinar Uluer, Hatice Kose, B. Oz, Turgut Can Aydinalev, D. Erol
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223534
The motivation of this work is to develop an affective robot companion for audiology rehabilitation and to test the system with deaf or hard-of-hearing children. Two robot modules are developed: a multimodal "stress/emotion/motivation" recognition module that lets the robot "understand" how the children feel, and a behaviour and feedback module that shows the children how the robot "feels". In this study, we focus only on the behaviour and feedback module. The selected affective/affirmative behaviours are tested by means of tablet games and employed on the robot as a feedback mechanism during an audiology test. Facial data are used together with surveys to evaluate the children’s perception of the robot and the behaviour set.
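As a rough illustration of what such a behaviour and feedback module can look like, the sketch below maps a recognized child state to an affective/affirmative robot behaviour; the states and behaviours are hypothetical examples, not the study's actual set.

```python
# Hypothetical state -> behaviour mapping for the feedback module.
FEEDBACK_BEHAVIOURS = {
    "stressed":    ("calming_gesture", "soft_voice_encouragement"),
    "unmotivated": ("cheering_animation", "verbal_praise"),
    "engaged":     ("affirmative_nod", "continue_task"),
}

def robot_feedback(recognized_state):
    """Select the robot's affective/affirmative behaviour for a recognized
    state, falling back to a neutral acknowledgement for unknown states."""
    return FEEDBACK_BEHAVIOURS.get(recognized_state,
                                   ("neutral_nod", "continue_task"))
```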
{"title":"Towards An Affective Robot Companion for Audiology Rehabilitation: How Does Pepper Feel Today?","authors":"Pinar Uluer, Hatice Kose, B. Oz, Turgut Can Aydinalev, D. Erol","doi":"10.1109/RO-MAN47096.2020.9223534","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223534","url":null,"abstract":"The motivation of this work is to develop an affective robot companion for audiology rehabilitation and to test the system with the deaf or hard of hearing children. Two robot modules are developed which are the multimodal \"stress/emotion/motivation\" recognition module for the robot to \"understand\" how the children feel, and behaviour and feedback module of the robot to show the children how the robot \"feels\". In this study we only focus on the behaviour and feedback module of the robot. The selected affective/affirmative behaviours are tested by means of tablet games and employed on the robot during an audiology test, as a feedback mechanism. Facial data are used together with the surveys to evaluate the children’s perception of the robot and the behaviour set.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125339827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Influence of vertical acceleration for inducing sensation of dropping by lower limb force feedback device
Toshinari Tanaka, Yuki Onozuka, M. Okui, Rie Nishihama, Taro Nakamura
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223476
Many haptic devices are currently being developed for the human upper limbs. There are various types of force feedback devices for the upper limbs, such as desktop and wearable types. However, the lower limbs absorb most of the force when standing or walking. To render a sensation of force to the lower limbs, devices worn like shoes that enable users to walk with a wide range of movement, and devices that provide a dropping sensation, have been developed. However, wide-area movement and a dropping sensation have not been combined in a single device. The authors therefore propose the concept of a lower-limb force feedback device that is worn like a shoe and provides the sensation of dropping while enabling wide-area movement. As the first stage of device development, the authors evaluated the human sensation of dropping. It was found that a relatively strong sensation of dropping can be induced in a human even with an acceleration smaller than the gravitational acceleration in real space. Thus, the lower-limb force feedback device to be developed in the future will let the user experience the sensation of dropping using an acceleration smaller than the gravitational acceleration in real space.
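For intuition about this finding, a short worked example (numbers illustrative, not from the paper): if the device accelerates the feet downward at a constant acceleration $a$ for a time $t$, the rendered drop depth is $d = \tfrac{1}{2} a t^{2}$, so even half the gravitational acceleration yields a clearly perceptible drop.

```latex
\[
  d = \tfrac{1}{2} a t^{2}, \qquad
  a = 0.5\,g \approx 4.9\ \mathrm{m/s^{2}},\; t = 0.2\ \mathrm{s}
  \;\Rightarrow\; d \approx 0.098\ \mathrm{m},
\]
\[
  \text{whereas true free fall over the same interval gives }
  d_{\mathrm{ff}} = \tfrac{1}{2} g t^{2} \approx 0.196\ \mathrm{m}.
\]
```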
{"title":"Influence of vertical acceleration for inducing sensation of dropping by lower limb force feedback device","authors":"Toshinari Tanaka, Yuki Onozuka, M. Okui, Rie Nishihama, Taro Nakamura","doi":"10.1109/RO-MAN47096.2020.9223476","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223476","url":null,"abstract":"Many haptic devices are currently being developed for human upper limbs. There are various types of force feedback devices for upper limbs, such as desktop and wearable type. However, the lower limbs absorb most of the force when standing or walking. Therefore, to render the sensation of force to the lower limbs, a device worn like a shoe to enable users to walk and have a wide range of movement and a device that provides a dropping sensation have been developed. However, both wide-area movement and a dropping sensation could not be combined in one device. Therefore, the authors propose the concept of a lower limb force feedback device that allows the user to wear it like a shoe and provides the sensation of dropping while enabling wide-area movement. In addition, as the first stage of device development, the authors evaluated the human sensation of dropping. Consequently, it was found that a relatively high sensation of dropping can be provided to a human even with an acceleration smaller than the gravitational acceleration in real space. Thus, the lower limb force feedback device to be developed in the future will allow the user to experience the sensation of dropping by using an acceleration smaller than the gravitational acceleration in real space.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132294196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Experimental Study of the Accuracy vs Inference Speed of RGB-D Object Recognition in Mobile Robotics
Ricardo Pereira, T. Barros, L. Garrote, Ana C. Lopes, U. Nunes
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223562
This paper presents a study of the accuracy and inference speed of RGB-D object detection and classification for mobile platform applications. The study is divided into three stages. In the first, eight state-of-the-art CNN-based object classifiers (AlexNet, VGG16/19, ResNet18/50/101, DenseNet, and MobileNetV2) are compared in terms of classification performance and the corresponding inference speeds. The second stage exploits YOLOv3/YOLOv3-tiny networks as Region of Interest generators. To obtain a real-time object recognition pipeline, the final stage unifies YOLOv3/YOLOv3-tiny with a CNN-based object classifier. The pipeline evaluates each object classifier with each Region of Interest generator in terms of accuracy and frame rate. To evaluate the proposed study under the conditions in which real robotic platforms navigate, a non-object-centric RGB-D dataset was recorded in the Institute of Systems and Robotics facilities using a camera on board the ISR-InterBot mobile platform. Experimental evaluations were also carried out on the Washington and COCO datasets. Promising performance was achieved by the combination of YOLOv3-tiny and ResNet18 networks on the embedded Nvidia Jetson TX2 hardware.
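The resulting two-stage structure, detect Regions of Interest and then classify each crop, can be sketched as below. For a self-contained example this uses torchvision's Faster R-CNN as a stand-in ROI generator in place of YOLOv3-tiny, with ResNet18 as the classifier the authors found fastest on the Jetson TX2; input normalization is omitted for brevity.

```python
import time
import torch
import torchvision
from torchvision.models import resnet18
from torchvision.transforms.functional import resized_crop, to_tensor

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights="DEFAULT").eval()              # stage 1: ROI generator (stand-in)
classifier = resnet18(weights="DEFAULT").eval()  # stage 2: crop classifier

@torch.no_grad()
def recognize(image):
    """image: PIL.Image -> (boxes, labels, fps). Detect ROIs, classify
    each crop, and measure the end-to-end frame rate."""
    start = time.time()
    x = to_tensor(image)
    boxes = detector([x])[0]["boxes"]      # ROI proposals (x1, y1, x2, y2)
    labels = []
    for (x1, y1, x2, y2) in boxes.tolist():
        crop = resized_crop(x, int(y1), int(x1),
                            max(int(y2 - y1), 1), max(int(x2 - x1), 1),
                            [224, 224])    # classifier input size
        labels.append(classifier(crop.unsqueeze(0)).argmax(1).item())
    fps = 1.0 / (time.time() - start)
    return boxes, labels, fps
```

Timing each classifier/ROI-generator pairing this way gives exactly the accuracy-versus-frame-rate trade-off the study reports.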
{"title":"An Experimental Study of the Accuracy vs Inference Speed of RGB-D Object Recognition in Mobile Robotics","authors":"Ricardo Pereira, T. Barros, L. Garrote, Ana C. Lopes, U. Nunes","doi":"10.1109/RO-MAN47096.2020.9223562","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223562","url":null,"abstract":"This paper presents a study in terms of accuracy and inference speed using RGB-D object detection and classification for mobile platform applications. The study is divided in three stages. In the first, eight state-of-the-art CNN-based object classifiers (AlexNet, VGG16-19, ResNet1850-101, DenseNet, and MobileNetV2) are used to compare the attained performances with the corresponding inference speeds in object classification tasks. The second stage consists in exploiting YOLOv3/YOLOv3-tiny networks to be used as Region of Interest generator method. In order to obtain a real-time object recognition pipeline, the final stage unifies the YOLOv3/YOLOv3-tiny with a CNN-based object classifier. The pipeline evaluates each object classifier with each Region of Interest generator method in terms of their accuracy and frame rate. For the evaluation of the proposed study under the conditions in which real robotic platforms navigate, a nonobject centric RGB-D dataset was recorded in Institute of Systems and Robotics facilities using a camera on-board the ISR-InterBot mobile platform. Experimental evaluations were also carried out in Washington and COCO datasets. Promising performances were achieved by the combination of YOLOv3tiny and ResNet18 networks on the embedded hardware Nvidia Jetson TX2.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"331 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133767677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teaching Robots Novel Objects by Pointing at Them
S. Gubbi, Raviteja Upadrashta, Shishir N. Y. Kolathaya, B. Amrutur
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223596
Robots that operate in novel environments and collaborate with humans must be capable of acquiring new knowledge from human experts during operation. We propose teaching a robot novel objects it has not encountered before by pointing a hand at the new object of interest. An end-to-end neural network is used to attend to the novel object indicated by the pointing hand and then to localize it in new scenes. To attend to the indicated object, we propose a spatial attention modulation mechanism that learns to focus on the highlighted object while ignoring the other objects in the scene. We show that a robot arm can manipulate novel objects that are highlighted by pointing at them. We also evaluate the performance of the proposed architecture on a synthetic dataset constructed using emojis and on a real-world dataset of common objects.
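The core idea, re-weighting scene features with an attention mask predicted from the pointing cue, can be sketched as a small PyTorch module; the shapes and layer choices here are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SpatialAttentionModulation(nn.Module):
    """Predict a per-pixel attention mask from scene + pointing features
    and use it to re-weight the scene feature map."""
    def __init__(self, channels):
        super().__init__()
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 1), nn.Sigmoid(),  # mask in [0, 1]
        )

    def forward(self, scene_feat, pointing_feat):
        # scene_feat, pointing_feat: (B, C, H, W)
        attn = self.mask_head(torch.cat([scene_feat, pointing_feat], dim=1))
        return scene_feat * attn   # suppress non-indicated objects

# Usage: modulated = SpatialAttentionModulation(256)(scene, pointing);
# the modulated features then feed a localization head for new scenes.
```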
{"title":"Teaching Robots Novel Objects by Pointing at Them","authors":"S. Gubbi, Raviteja Upadrashta, Shishir N. Y. Kolathaya, B. Amrutur","doi":"10.1109/RO-MAN47096.2020.9223596","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223596","url":null,"abstract":"Robots that must operate in novel environments and collaborate with humans must be capable of acquiring new knowledge from human experts during operation. We propose teaching a robot novel objects it has not encountered before by pointing a hand at the new object of interest. An end-to-end neural network is used to attend to the novel object of interest indicated by the pointing hand and then to localize the object in new scenes. In order to attend to the novel object indicated by the pointing hand, we propose a spatial attention modulation mechanism that learns to focus on the highlighted object while ignoring the other objects in the scene. We show that a robot arm can manipulate novel objects that are highlighted by pointing a hand at them. We also evaluate the performance of the proposed architecture on a synthetic dataset constructed using emojis and on a real-world dataset of common objects.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117026223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}