Bright and Dark Timbre Expressions with Sound Pressure and Tempo Variations by Violin-playing Robot
Pub Date: 2020-08-01  DOI: 10.1109/RO-MAN47096.2020.9223503
K. Shibuya, Kento Kosuga, H. Fukuhara
This study aims to build a violin-playing robot that can automatically determine how to perform based on the information included in musical scores. In this paper, we discuss the design of the variation pattern for the tempo of every bar and the sound pressure of every musical note to produce sounds that convey bright and dark impressions. First, we present the analytical results of a trained violinist’s performance, in which we found that the tempo of the bright timbre is faster than that of the dark timbre, and that the bright performance includes several steep variations in the sound pressure pattern. We then propose a design method for the performance to express bright and dark timbres based on these analytical results. In the experiments, sounds were produced by our anthropomorphic violin-playing robot, which can vary the sound pressure by varying a wrist joint angle. The sounds produced by the robot were analyzed, and we confirmed that the sound pressure patterns produced for the bright performance are similar to the designed patterns. The sounds were also evaluated by ten subjects, and we found that they distinguished the bright performances from the dark ones when the sound pressure and tempo variations were included.
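Purely as an illustration of the kind of design rule described in this abstract, the sketch below parameterizes per-bar tempo and per-note sound pressure for "bright" versus "dark" performances. The function name, the scaling constants, and the placement of the steep variations are assumptions for illustration, not the authors' measured values or design method.

```python
import numpy as np

def design_performance(base_tempo_bpm, n_bars, notes_per_bar, mood="bright"):
    """Toy design of per-bar tempo and per-note sound pressure.

    Bright: faster tempo plus a few steep rises in sound pressure;
    dark: slower tempo with a smoother pressure pattern.
    All constants are illustrative placeholders.
    """
    tempo_scale = 1.1 if mood == "bright" else 0.9           # faster tempo for the bright timbre
    tempo = np.full(n_bars, base_tempo_bpm * tempo_scale)     # tempo of every bar

    n_notes = n_bars * notes_per_bar
    pressure = np.full(n_notes, 0.6)                          # nominal sound pressure (arbitrary units)
    if mood == "bright":
        # inject a few steep variations, as observed in the bright performance
        for i in np.linspace(0, n_notes - 1, 4, dtype=int):
            pressure[i] = 0.9
    return tempo, pressure

tempo, pressure = design_performance(base_tempo_bpm=96, n_bars=8, notes_per_bar=4, mood="bright")
print(tempo[:2], pressure[:8])
```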
{"title":"Bright and Dark Timbre Expressions with Sound Pressure and Tempo Variations by Violin-playing Robot*","authors":"K. Shibuya, Kento Kosuga, H. Fukuhara","doi":"10.1109/RO-MAN47096.2020.9223503","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223503","url":null,"abstract":"This study aims to build a violin-playing robot that can automatically determine how to perform based on the information included in musical scores. In this paper, we discuss the design of the variation pattern for the tempo of every bar and the sound pressure of every musical note to produce sounds that can convey bright and dark impressions. First, we present the analytical results of a trained violinist’s performance, in which we found that the tempo of the bright timbre is faster than that of the dark timbre, and the bright performance includes several steep variations in the sound pressure pattern. We then propose a design method for the performance to express bright and dark timbres based on the analytical results. In the experiments, sounds were produced by our anthropomorphic violin-playing robot, which can vary the sound pressure by varying a wrist joint angle. The sounds produced by the robot were analyzed, and we confirmed that the patterns of the produced sound pressure for the bright performance are similar to those of the designed one. The sounds were also evaluated by ten subjects, and we found that they distinguished the bright performances from the dark ones when the sound pressure and tempo variations were included.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116726418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Influences of Media Literacy and Experiences of Robots into Negative Attitudes toward Robots in Japan
Pub Date: 2020-08-01  DOI: 10.1109/RO-MAN47096.2020.9223590
T. Nomura, Shun Horii
To investigate the influences of media literacy on experiences of and negative attitudes toward robots, an online survey was conducted in Japan (N = 500). The results suggested that the connections of robot experiences with media literacy and negative attitudes toward robots were weak, and that both media literacy and robot experiences had negative effects on negative attitudes toward interaction with robots.
{"title":"Influences of Media Literacy and Experiences of Robots into Negative Attitudes toward Robots in Japan","authors":"T. Nomura, Shun Horii","doi":"10.1109/RO-MAN47096.2020.9223590","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223590","url":null,"abstract":"To investigate influences of media literacy into experiences of and negative attitudes toward robots, an online survey was conducted in Japan (N = 500). The results suggested that the connections of robot experiences with media literacy and negative attitudes toward robots were weak, and both media literacy and robot experiences had negative effects on negative attitudes toward interaction with robots.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125597815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Adaptive Control Approach to Robotic Assembly with Uncertainties in Vision and Dynamics
Pub Date: 2020-08-01  DOI: 10.1109/RO-MAN47096.2020.9223515
Emir Mobedi, Nicola Villa, Wansoo Kim, A. Ajoudani
The objective of this paper is to propose an adaptive impedance control framework to cope with uncertainties in vision and dynamics in robotic assembly tasks. The framework is composed of an adaptive controller, a vision system, and an interaction planner, all supervised by a finite state machine. In this framework, the target assembly object’s pose is detected through the vision module and then used for planning the robot trajectories. The adaptive impedance control module copes with the uncertainties of the vision and interaction planner modules in aligning the assembly parts (a peg and a hole in this work). Unlike classical impedance controllers, the online adaptation rule regulates the level of robot compliance in constrained directions, acting on and responding to the external forces. This enables the implementation of a flexible and adaptive Remote Center of Compliance (RCC) system using active control. We first evaluate the performance of the proposed adaptive controller in comparison to classical impedance control. Next, the overall performance of the integrated system is evaluated in a peg-in-hole setup with different clearances and orientation mismatches.
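As a rough sketch of the general idea of adapting impedance under contact forces (not the paper's actual adaptation law, gains, or controller structure), the following toy loop softens the stiffness along axes that experience large external forces while stepping a per-axis mass-spring-damper impedance model. All constants and the adaptation rule itself are assumptions for illustration.

```python
import numpy as np

def adapt_stiffness(K, f_ext, k_min=50.0, k_max=800.0, beta=2.0):
    """Illustrative adaptation rule: soften stiffness along axes with large
    external (contact) forces so the part can comply and align."""
    return np.clip(K - beta * np.abs(f_ext), k_min, k_max)

def impedance_step(x, v, x_des, K, D, f_ext, dt=0.001, m=1.0):
    """One Euler step of a per-axis mass-spring-damper impedance model."""
    acc = (K * (x_des - x) - D * v + f_ext) / m
    v = v + acc * dt
    x = x + v * dt
    return x, v

K = np.array([400.0, 400.0, 400.0])      # translational stiffness [N/m]
D = 2.0 * np.sqrt(K)                      # near-critical damping for the initial stiffness
x, v = np.zeros(3), np.zeros(3)
x_des = np.array([0.0, 0.0, -0.05])       # desired insertion motion along z

for _ in range(1000):
    f_ext = np.array([3.0, -1.5, 8.0])    # pretend contact wrench caused by misalignment
    K = adapt_stiffness(K, f_ext)         # online compliance adaptation in constrained directions
    x, v = impedance_step(x, v, x_des, K, D, f_ext)

print(K, x)
```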
{"title":"An Adaptive Control Approach to Robotic Assembly with Uncertainties in Vision and Dynamics","authors":"Emir Mobedi, Nicola Villa, Wansoo Kim, A. Ajoudani","doi":"10.1109/RO-MAN47096.2020.9223515","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223515","url":null,"abstract":"The objective of this paper is to propose an adaptive impedance control framework to cope with uncertainties in vision and dynamics in robotic assembly tasks. The framework is composed of an adaptive controller, a vision system, and an interaction planner, which are all supervised by a finite state machine. In this framework, the target assembly object’s pose is detected through the vision module, which is then used for the planning of the robot trajectories. The adaptive impedance control module copes with the uncertainties of the vision and the interaction planner modules in alignment of the assembly parts (a peg and a hole in this work). Unlike the classical impedance controllers, the online adaptation rule regulates the level of robot compliance in constrained directions, acting on and responding to the external forces. This enables the implementation of a flexible and adaptive Remote Center of Compliance (RCC) system, using active control. We first evaluate the performance of the proposed adaptive controller in comparison to classical impedance control. Next, the overall performance of the integrated system is evaluated in a peg-in-hole setup, with different clearances and orientation mismatches.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"18 13","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113963196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pedestrian Density Based Path Recognition and Risk Prediction for Autonomous Vehicles
Pub Date: 2020-08-01  DOI: 10.1109/RO-MAN47096.2020.9223554
Kasra Mokhtari, Ali Ayub, Vidullan Surendran, Alan R. Wagner
Human drivers continually use social information to inform their decision making. We believe that incorporating this information into autonomous vehicle decision making would improve performance and, importantly, safety. This paper investigates how information in the form of pedestrian density can be used to identify the path being travelled and to predict the number of pedestrians that the vehicle will encounter along that path in the future. We present experiments that use camera data captured while driving to evaluate our methods for path recognition and pedestrian density prediction. Our results show that we can identify the vehicle’s path using only pedestrian density with 92.4% accuracy, and that we can predict the number of pedestrians the vehicle will encounter with an accuracy of 70.45%. These results demonstrate that pedestrian density can serve as a source of information, perhaps both to augment localization and to predict path risk.
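To make the path-recognition idea concrete, here is a hedged toy example that classifies which route was driven from pedestrian-count features using synthetic data and an off-the-shelf classifier. The feature layout, count distributions, and classifier choice are assumptions for illustration only, not the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_drive(path_id, n_bins=20):
    """Hypothetical per-drive feature: pedestrian counts per camera-frame bucket."""
    base = [2, 8, 15][path_id]               # routes differ in typical pedestrian density
    return rng.poisson(base, size=n_bins)

paths = rng.integers(0, 3, size=600)          # which of three known routes was driven (label)
X = np.array([synth_drive(p) for p in paths]) # pedestrian-density features per drive
y = paths

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("toy path recognition accuracy:", clf.score(Xte, yte))
```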
{"title":"Pedestrian Density Based Path Recognition and Risk Prediction for Autonomous Vehicles","authors":"Kasra Mokhtari, Ali Ayub, Vidullan Surendran, Alan R. Wagner","doi":"10.1109/RO-MAN47096.2020.9223554","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223554","url":null,"abstract":"Human drivers continually use social information to inform their decision making. We believe that incorporating this information into autonomous vehicle decision making would improve performance and importantly safety. This paper investigates how information in the form of pedestrian density can be used to identify the path being travelled and predict the number of pedestrians that the vehicle will encounter along that path in the future. We present experiments which use camera data captured while driving to evaluate our methods for path recognition and pedestrian density prediction. Our results show that we can identify the vehicle’s path using only pedestrian density at 92.4% accuracy and we can predict the number of pedestrians the vehicle will encounter with an accuracy of 70.45%. These results demonstrate that pedestrian density can serve as a source of information both perhaps to augment localization and for path risk prediction.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128083010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimation of Mental Health Quality of Life using Visual Information during Interaction with a Communication Agent
Pub Date: 2020-08-01  DOI: 10.1109/RO-MAN47096.2020.9223606
S. Nakagawa, S. Yonekura, Hoshinori Kanazawa, Satoshi Nishikawa, Y. Kuniyoshi
It is essential for a monitoring system or a communication robot that interacts with an elderly person to accurately understand the user’s state and generate actions based on their condition. To ensure elderly welfare, quality of life (QOL) is a useful indicator for assessing human physical suffering and mental and social activities in a comprehensive manner. In this study, we hypothesize that visual information is useful for extracting high-dimensional information on QOL from data collected by an agent while interacting with a person. We propose a QOL estimation method that integrates facial expressions, head fluctuations, and eye movements, which can be extracted as visual information during interaction with the communication agent. Our goal is to implement a multiple feature vectors learning estimator that incorporates Convolutional 3D (C3D) to learn spatiotemporal features. However, no database suitable for QOL estimation exists. Therefore, we implemented a free communication agent and constructed our database from information collected through interpersonal experiments using the agent. To verify the proposed method, we focus on the estimation of the "mental health" QOL scale, which a previous study found to be the most difficult to estimate among the eight scales that compose QOL. We compare four estimation accuracies: single-modal learning using each of the three features (facial expressions, head fluctuations, and eye movements) and multiple feature vectors learning integrating all three features. The experimental results show that multiple feature vectors learning yields smaller estimation errors than any of the single-modal learners, each of which uses one feature separately. The experimental results evaluating the difference between the QOL score estimated by the proposed method and the actual QOL score calculated by the conventional method also show that the average error is less than 10 points; thus, the proposed system can estimate the QOL score. It is therefore clear that the proposed approach for estimating human conditions can improve the quality of human–robot interactions and personalized monitoring.
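The following is a minimal, hypothetical PyTorch sketch of the multi-stream idea: three small 3D-convolutional streams (simple stand-ins for C3D) for facial expressions, head fluctuations, and eye movements, fused to regress a single QOL score. The layer sizes, clip shapes, and fusion scheme are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class Stream(nn.Module):
    """Tiny 3D-conv feature extractor for one visual cue (a stand-in for C3D)."""
    def __init__(self, in_ch=3, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, feat), nn.ReLU(),
        )
    def forward(self, clip):                 # clip: (B, C, T, H, W)
        return self.net(clip)

class QolRegressor(nn.Module):
    """Fuses face, head-motion, and eye-movement streams into one QOL score."""
    def __init__(self):
        super().__init__()
        self.face, self.head, self.eye = Stream(), Stream(), Stream()
        self.out = nn.Linear(3 * 64, 1)       # regress e.g. the mental-health QOL scale
    def forward(self, face_clip, head_clip, eye_clip):
        z = torch.cat([self.face(face_clip), self.head(head_clip), self.eye(eye_clip)], dim=1)
        return self.out(z)

model = QolRegressor()
dummy = torch.randn(2, 3, 8, 32, 32)          # batch of 2 short clips per cue
print(model(dummy, dummy, dummy).shape)        # -> torch.Size([2, 1])
```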
{"title":"Estimation of Mental Health Quality of Life using Visual Information during Interaction with a Communication Agent","authors":"S. Nakagawa, S. Yonekura, Hoshinori Kanazawa, Satoshi Nishikawa, Y. Kuniyoshi","doi":"10.1109/RO-MAN47096.2020.9223606","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223606","url":null,"abstract":"It is essential for a monitoring system or a communication robot that interacts with an elderly person to accurately understand the user’s state and generate actions based on their condition. To ensure elderly welfare, quality of life (QOL) is a useful indicator for determining human physical suffering and mental and social activities in a comprehensive manner. In this study, we hypothesize that visual information is useful for extracting high-dimensional information on QOL from data collected by an agent while interacting with a person. We propose a QOL estimation method to integrate facial expressions, head fluctuations, and eye movements that can be extracted as visual information during the interaction with the communication agent. Our goal is to implement a multiple feature vectors learning estimator that incorporates convolutional 3D to learn spatiotemporal features. However, there is no database required for QOL estimation. Therefore, we implement a free communication agent and construct our database based on information collected through interpersonal experiments using the agent. To verify the proposed method, we focus on the estimation of the \"mental health\" QOL scale, which is the most difficult to estimate among the eight scales that compose QOL based on a previous study. We compare the four estimation accuracies: single-modal learning using each of the three features, i.e., facial expressions, head fluctuations, and eye movements and multiple feature vectors learning integrating all the three features. The experimental results show that multiple feature vectors learning has fewer estimation errors than all the other single-modal learning, which uses each feature separately. The experimental results for evaluating the difference between the estimated QOL score by the proposed method and the actual QOL score calculated by the conventional method also show that the average error is less than 10 points and, thus, the proposed system can estimate the QOL score. Thus, it is clear that the proposed new approach for estimating human conditions can improve the quality of human–robot interactions and personalized monitoring.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133997706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Influence of vertical acceleration for inducing sensation of dropping by lower limb force feedback device
Pub Date: 2020-08-01  DOI: 10.1109/RO-MAN47096.2020.9223476
Toshinari Tanaka, Yuki Onozuka, M. Okui, Rie Nishihama, Taro Nakamura
Many haptic devices are currently being developed for human upper limbs. There are various types of force feedback devices for upper limbs, such as desktop and wearable types. However, the lower limbs absorb most of the force when standing or walking. Therefore, to render the sensation of force to the lower limbs, two kinds of devices have been developed: a device worn like a shoe that enables users to walk with a wide range of movement, and a device that provides a dropping sensation. However, wide-area movement and a dropping sensation have not been combined in a single device. Therefore, the authors propose the concept of a lower limb force feedback device that is worn like a shoe and provides the sensation of dropping while enabling wide-area movement. In addition, as the first stage of device development, the authors evaluated the human sensation of dropping. Consequently, it was found that a relatively strong sensation of dropping can be provided to a human even with an acceleration smaller than the gravitational acceleration in real space. Thus, the lower limb force feedback device to be developed in the future will allow the user to experience the sensation of dropping using an acceleration smaller than the gravitational acceleration in real space.
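As a small illustration of the stimulus discussed above, the sketch below generates a constant downward acceleration command at a fraction of the gravitational acceleration and integrates it into velocity and displacement. The ratio and duration are arbitrary placeholders, not the experimental conditions reported in the paper.

```python
import numpy as np

G = 9.81        # gravitational acceleration [m/s^2]
dt = 0.001      # control period [s]

def drop_profile(ratio=0.4, duration=0.3):
    """Constant downward acceleration at `ratio` * g for `duration` seconds
    (illustrative values only)."""
    n = int(duration / dt)
    return np.full(n, -ratio * G)

acc = drop_profile()
vel = np.cumsum(acc) * dt      # integrate acceleration to vertical velocity
pos = np.cumsum(vel) * dt      # integrate velocity to foot displacement
print(f"peak downward speed {vel.min():.2f} m/s, total drop {pos.min() * 100:.1f} cm")
```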
{"title":"Influence of vertical acceleration for inducing sensation of dropping by lower limb force feedback device","authors":"Toshinari Tanaka, Yuki Onozuka, M. Okui, Rie Nishihama, Taro Nakamura","doi":"10.1109/RO-MAN47096.2020.9223476","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223476","url":null,"abstract":"Many haptic devices are currently being developed for human upper limbs. There are various types of force feedback devices for upper limbs, such as desktop and wearable type. However, the lower limbs absorb most of the force when standing or walking. Therefore, to render the sensation of force to the lower limbs, a device worn like a shoe to enable users to walk and have a wide range of movement and a device that provides a dropping sensation have been developed. However, both wide-area movement and a dropping sensation could not be combined in one device. Therefore, the authors propose the concept of a lower limb force feedback device that allows the user to wear it like a shoe and provides the sensation of dropping while enabling wide-area movement. In addition, as the first stage of device development, the authors evaluated the human sensation of dropping. Consequently, it was found that a relatively high sensation of dropping can be provided to a human even with an acceleration smaller than the gravitational acceleration in real space. Thus, the lower limb force feedback device to be developed in the future will allow the user to experience the sensation of dropping by using an acceleration smaller than the gravitational acceleration in real space.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132294196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Experimental Study of the Accuracy vs Inference Speed of RGB-D Object Recognition in Mobile Robotics
Pub Date: 2020-08-01  DOI: 10.1109/RO-MAN47096.2020.9223562
Ricardo Pereira, T. Barros, L. Garrote, Ana C. Lopes, U. Nunes
This paper presents a study of accuracy and inference speed for RGB-D object detection and classification in mobile platform applications. The study is divided into three stages. In the first, eight state-of-the-art CNN-based object classifiers (AlexNet, VGG16/19, ResNet18/50/101, DenseNet, and MobileNetV2) are used to compare the attained accuracies with the corresponding inference speeds in object classification tasks. The second stage consists in exploiting YOLOv3/YOLOv3-tiny networks as Region of Interest (RoI) generators. To obtain a real-time object recognition pipeline, the final stage unifies YOLOv3/YOLOv3-tiny with a CNN-based object classifier. The pipeline evaluates each object classifier with each RoI generator in terms of accuracy and frame rate. To evaluate the proposed study under the conditions in which real robotic platforms navigate, a non-object-centric RGB-D dataset was recorded in the Institute of Systems and Robotics facilities using a camera on board the ISR-InterBot mobile platform. Experimental evaluations were also carried out on the Washington and COCO datasets. Promising performance was achieved by the combination of the YOLOv3-tiny and ResNet18 networks on the Nvidia Jetson TX2 embedded hardware.
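A hedged sketch of the final-stage pipeline structure (RoI generation followed by CNN classification of each crop, with a rough frame-rate measurement): the RoI generator is abstracted as a callable because wiring up actual YOLOv3/YOLOv3-tiny weights is outside this snippet, and an ImageNet-pretrained ResNet18 from torchvision stands in for the classifier. This is an illustration of the two-stage idea, not the authors' implementation.

```python
import time
import numpy as np
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

# Classifier stage: ImageNet-pretrained ResNet18 applied to each RoI crop.
classifier = resnet18(weights="IMAGENET1K_V1").eval()
prep = T.Compose([
    T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def recognize(frame, roi_generator):
    """frame: HxWx3 uint8 RGB image; roi_generator returns (x1, y1, x2, y2) boxes.
    Returns (box, class_id) pairs and the achieved frames per second."""
    t0 = time.time()
    results = []
    for (x1, y1, x2, y2) in roi_generator(frame):
        crop = prep(frame[y1:y2, x1:x2]).unsqueeze(0)
        with torch.no_grad():
            results.append(((x1, y1, x2, y2), int(classifier(crop).argmax(1))))
    fps = 1.0 / max(time.time() - t0, 1e-6)
    return results, fps

# Toy usage with a random frame and a fake one-box RoI generator.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
print(recognize(frame, lambda f: [(100, 100, 300, 300)])[1], "fps (toy)")
```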
{"title":"An Experimental Study of the Accuracy vs Inference Speed of RGB-D Object Recognition in Mobile Robotics","authors":"Ricardo Pereira, T. Barros, L. Garrote, Ana C. Lopes, U. Nunes","doi":"10.1109/RO-MAN47096.2020.9223562","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223562","url":null,"abstract":"This paper presents a study in terms of accuracy and inference speed using RGB-D object detection and classification for mobile platform applications. The study is divided in three stages. In the first, eight state-of-the-art CNN-based object classifiers (AlexNet, VGG16-19, ResNet1850-101, DenseNet, and MobileNetV2) are used to compare the attained performances with the corresponding inference speeds in object classification tasks. The second stage consists in exploiting YOLOv3/YOLOv3-tiny networks to be used as Region of Interest generator method. In order to obtain a real-time object recognition pipeline, the final stage unifies the YOLOv3/YOLOv3-tiny with a CNN-based object classifier. The pipeline evaluates each object classifier with each Region of Interest generator method in terms of their accuracy and frame rate. For the evaluation of the proposed study under the conditions in which real robotic platforms navigate, a nonobject centric RGB-D dataset was recorded in Institute of Systems and Robotics facilities using a camera on-board the ISR-InterBot mobile platform. Experimental evaluations were also carried out in Washington and COCO datasets. Promising performances were achieved by the combination of YOLOv3tiny and ResNet18 networks on the embedded hardware Nvidia Jetson TX2.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"331 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133767677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiple-Robot Mediated Discussion System to support group discussion
Pub Date: 2020-08-01  DOI: 10.1109/RO-MAN47096.2020.9223444
Shogo Ikari, Y. Yoshikawa, H. Ishiguro
Deep discussions on topics without definite answers are important for society, but they are also challenging to facilitate. Recently, advances have been made in the technology of using robots to facilitate discussions. In this study, we developed a multiple-robot mediated discussion system (m-RMDS) to support discussions by having multiple robots assert their own points and lead a dialogue in a group of human participants. The robots involved the participants in the discussion by asking them for advice. We implemented the m-RMDS in discussions on difficult topics with no clear answers. A within-subject experiment with 16 groups (N=64) was conducted to evaluate the contribution of the m-RMDS. The participants completed a questionnaire about their discussion skills and self-confidence. Then, they participated in two discussions, one facilitated by the m-RMDS and one unfacilitated. They evaluated and compared both experiences across multiple aspects. Participants with low confidence in conducting a discussion rated the facilitated discussion as easier to move forward than the unfacilitated one. Furthermore, they reported hearing more of the others' frank opinions during the facilitated discussion than during the unfacilitated one. In addition, regardless of their confidence level, participants tended to respond that they would like to use the system again. We also review necessary improvements to the system and suggest future applications.
{"title":"Multiple-Robot Mediated Discussion System to support group discussion *","authors":"Shogo Ikari, Y. Yoshikawa, H. Ishiguro","doi":"10.1109/RO-MAN47096.2020.9223444","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223444","url":null,"abstract":"Deep discussions on topics without definite answers are important for society, but they are also challenging to facilitate. Recently, advances in the technology of using robots to facilitate discussions have been made. In this study, we developed a multiple-robot mediated discussion system (m-RMDS) to support discussions by having multiple robots assert their own points and lead a dialogue in a group of human participants. The robots involved the participants in a discussion through asking them for advice. We implemented the m-RMDS in discussions on difficult topics with no clear answers. A within-subject experiment with 16 groups (N=64) was conducted to evaluate the contribution of the m-RMDS. The participants completed a questionnaire about their discussion skills and their self-confidence. Then, they participated in two discussions, one facilitated by the m-RMDS and one that was unfacilitated. They evaluated and compared both experiences across multiple aspects. The participants with low confidence in conducting a discussion evaluated the discussion with m-RMDS as easier to move forward than the discussion without m-RMDS. Furthermore, they reported that they heard more of others' frank opinions during the facilitated discussion than during the unfacilitated one. In addition, regardless of their confidence level, the participants tended to respond that they would like to use the system again. We also review necessary improvements to the system and suggest future applications.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133415793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Effects of Internet of Robotic Things on In-home Social Family Relationships
Pub Date: 2020-08-01  DOI: 10.1109/RO-MAN47096.2020.9223345
Byeong June Moon, Sonya S. Kwak, Dahyun Kang, Hanbyeol Lee, Jong-suk Choi
Robotic things and social robots have been introduced into the home, and they are expected to change the relationships between humans. Our study examines whether the introduction of robotic things or social robots, and the way they are organized, can change the social relationships between family members. To observe this phenomenon, we designed a living lab experiment that simulated a home environment and recruited two families to participate. The families were asked to conduct home activities within two different types of Internet of Robotic Things (IoRT): 1) an internet of only robotic things (the IoRT-without-mediator condition), and 2) an internet of robotic things mediated by a social robot (the IoRT-with-mediator condition). We recorded the interactions between the family members and the robotic things during the experiments and coded them into a dataset for social network analysis. The results revealed relationship differences between the two conditions. The introduction of the IoRT without a mediator motivated younger-generation family members to share the burden of caring for other members, which had previously been the duty of the mothers. However, it made the interaction network inefficient for indirect interaction. On the contrary, introducing the IoRT with a mediator did not significantly change family relationships at the actor level, and the mothers remained in charge of caring for other family members. However, the IoRT with a mediator made indirect interactions within the network more efficient. Furthermore, the role of the social robot mediator overlapped with that of the mothers. This shows that a social robot mediator can help the mothers care for other members of the family by operating and managing robotic things. Additionally, we discuss the implications for developing the IoRT for the home.
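To illustrate the kind of social-network measures involved, the toy sketch below builds two hypothetical interaction graphs (with and without a mediating social robot) and compares their global efficiency and degree centrality with networkx. The actors and edges are invented for illustration and do not reproduce the study's coded interaction data.

```python
import networkx as nx

# Hypothetical coded interactions (who interacted with whom); placeholders only.
without_mediator = [("mother", "child1"), ("mother", "child2"),
                    ("child1", "vacuum_bot"), ("child2", "speaker_bot")]
with_mediator = [("mediator", "mother"), ("mediator", "child1"),
                 ("mediator", "child2"), ("mediator", "vacuum_bot"),
                 ("mediator", "speaker_bot"), ("mother", "child1")]

for name, edges in [("IoRT without mediator", without_mediator),
                    ("IoRT with mediator", with_mediator)]:
    g = nx.Graph(edges)
    # Global efficiency: how cheaply any two actors reach each other through
    # indirect interaction (higher means a more efficient network).
    print(name,
          "efficiency:", round(nx.global_efficiency(g), 3),
          "degree centrality:", {n: round(c, 2) for n, c in nx.degree_centrality(g).items()})
```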
{"title":"The Effects of Internet of Robotic Things on In-home Social Family Relationships","authors":"Byeong June Moon, Sonya S. Kwak, Dahyun Kang, Hanbyeol Lee, Jong-suk Choi","doi":"10.1109/RO-MAN47096.2020.9223345","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223345","url":null,"abstract":"Robotic things and social robots have been introduced into home, and they are expected to change the relationships between humans. Our study examines whether the introduction of robotic things or social robots, and the way that they are organized, can change the social relationship between family members. To observe this phenomenon, we designed a living lab experiment that simulated a home environment and recruited two families to participate. Families were asked to conduct home activities within two different types of Internet of Robotic Things(IoRT):1)internet of only robotic things(IoRT without mediator condition), and 2)internet of robotic things mediated by a social robot(IoRT with mediator condition). We recorded the interactions between the family members and the robotic things during the experiments and coded them into a dataset for social network analysis. The results revealed relationship differences between the two conditions. The introduction of IoRT without mediator motivated younger generation family members to share the burden of caring for other members, which was previously the duty of the mothers. However, this made the interaction network inefficient to do indirect interaction. On the contrary, introducing IoRT with mediator did not significantly change family relationships at the actor-level, and the mothers remained in charge of caring for other family members. However, IoRT with mediator made indirect interactions within the network more efficient. Furthermore, the role of the social robot mediator overlapped with that of the mothers. This shows that a social robot mediator can help the mothers care for other members of the family by operating and managing robotic things. Additionally, we discussed the implications for developing the IoRT for home.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117180683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teaching Robots Novel Objects by Pointing at Them
Pub Date: 2020-08-01  DOI: 10.1109/RO-MAN47096.2020.9223596
S. Gubbi, Raviteja Upadrashta, Shishir N. Y. Kolathaya, B. Amrutur
Robots that must operate in novel environments and collaborate with humans must be capable of acquiring new knowledge from human experts during operation. We propose teaching a robot novel objects it has not encountered before by pointing a hand at the new object of interest. An end-to-end neural network is used to attend to the novel object of interest indicated by the pointing hand and then to localize the object in new scenes. In order to attend to the novel object indicated by the pointing hand, we propose a spatial attention modulation mechanism that learns to focus on the highlighted object while ignoring the other objects in the scene. We show that a robot arm can manipulate novel objects that are highlighted by pointing a hand at them. We also evaluate the performance of the proposed architecture on a synthetic dataset constructed using emojis and on a real-world dataset of common objects.
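A minimal, assumed PyTorch sketch of the spatial attention modulation idea: an embedding of the pointing cue gates the scene feature map so that activations belonging to the indicated object are emphasized and other objects are suppressed, before a localization head is applied. The architecture, layer sizes, and input shapes are illustrative assumptions, not the paper's end-to-end network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointingAttention(nn.Module):
    """Toy spatial attention: a pointing-hand embedding modulates the scene
    feature map so activations near the indicated object are emphasized."""
    def __init__(self, feat_ch=32, point_dim=16):
        super().__init__()
        self.backbone = nn.Conv2d(3, feat_ch, kernel_size=3, padding=1)
        self.query = nn.Linear(point_dim, feat_ch)      # embed the pointing cue
        self.loc_head = nn.Conv2d(feat_ch, 1, kernel_size=1)

    def forward(self, scene, point_vec):
        feats = F.relu(self.backbone(scene))             # (B, C, H, W) scene features
        q = self.query(point_vec)[:, :, None, None]      # (B, C, 1, 1) pointing query
        attn = torch.sigmoid((feats * q).sum(1, keepdim=True))  # (B, 1, H, W) spatial mask
        modulated = feats * attn                           # suppress non-indicated objects
        return self.loc_head(modulated)                    # localization heatmap

net = PointingAttention()
heatmap = net(torch.randn(1, 3, 64, 64), torch.randn(1, 16))
print(heatmap.shape)                                       # -> torch.Size([1, 1, 64, 64])
```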
{"title":"Teaching Robots Novel Objects by Pointing at Them","authors":"S. Gubbi, Raviteja Upadrashta, Shishir N. Y. Kolathaya, B. Amrutur","doi":"10.1109/RO-MAN47096.2020.9223596","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223596","url":null,"abstract":"Robots that must operate in novel environments and collaborate with humans must be capable of acquiring new knowledge from human experts during operation. We propose teaching a robot novel objects it has not encountered before by pointing a hand at the new object of interest. An end-to-end neural network is used to attend to the novel object of interest indicated by the pointing hand and then to localize the object in new scenes. In order to attend to the novel object indicated by the pointing hand, we propose a spatial attention modulation mechanism that learns to focus on the highlighted object while ignoring the other objects in the scene. We show that a robot arm can manipulate novel objects that are highlighted by pointing a hand at them. We also evaluate the performance of the proposed architecture on a synthetic dataset constructed using emojis and on a real-world dataset of common objects.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117026223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}