A Robot-Delivered Program for Low-Intensity Problem-Solving Therapy for Students in Higher Education
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515532
Nicole L. Robinson, Belinda Ward, D. Kavanagh
Social robots have been used to help people make healthy changes, and the higher education sector is one setting that could benefit from additional support services. This trial was an initial test of how a social robot could deliver a low-intensity problem-solving session for students around study-related issues and challenges. A Pepper humanoid robot was deployed in a student centre to help students build a problem-solving plan for a specific issue. In the trial, 72 students gave detailed responses to session questions on issues such as procrastination, life/study balance and study workload. Students reported good ratings for emotional reaction to the robot, perceived utility, intention to use the robot again, confidence to use the robot, perceived helpfulness of the robot, likelihood of using the robot for a new higher-education issue, and likelihood of recommending the robot to a friend. Robot evaluation scores were correlated with perceived helpfulness of the robot and confidence to try an idea in the next week. Students who gave the robot positive evaluation scores were also more willing to use the session content and rated it as more helpful. One week later, most students reported that the robot session had helped them to address their chosen issue and that they had used at least one idea from the session. Overall, this study found that a session run by a social robot could provide support for a study-related issue or challenge, and that some students did benefit from the session content. Future studies could include enhancements and adaptations to session length, technical refinement, and the capacity to address new issues during the session.
{"title":"A Robot-Delivered Program for Low-Intensity Problem-Solving Therapy for Students in Higher Education","authors":"Nicole L. Robinson, Belinda Ward, D. Kavanagh","doi":"10.1109/RO-MAN50785.2021.9515532","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515532","url":null,"abstract":"Social robots have been used to help people to make healthy changes, and one setting that could benefit from having more support services offered includes the higher education sector. This trial involved an initial test to explore how a social robot could help to deliver a low-intensity problem-solving session for students around study-related issues and challenges. A Pepper Humanoid Robot was deployed in a student centre to help students to build a problem-solving plan on a specific issue. In the trial, 72 students gave detailed responses to session questions for issues such as procrastination, life/study balance and study workload. Students reported good ratings for emotional reaction to the robot, perceived utility, intention to use the robot again, confidence to use the robot, perceived helpfulness from the robot, likelihood to use the robot for a new higher education issue, and to recommend the robot to a friend. Robot evaluation scores were correlated with scores on perceived helpfulness of the robot and confidence to try an idea in the next week. Students who reported positive robot evaluation scores were also more willing to use the session content and rate the content as helpful. One week later, most students reported that the robot session helped them to fix their chosen issue, and that they used at least one idea from the session. Overall, this study found that a session run by a social robot could provide support for a study-related issue or challenge, and that some students did receive benefit from the session content. Future studies could include enhancements and adaptations to session length, technical refinement and capacity to address new issues during the session.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"15 1","pages":"945-950"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78481731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Children’s Beliefs for Adoption or Rejection of Domestic Social Robots*
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515438
C. D. Jong, J. Peter, R. Kühne, C. L. V. Straten, Àlex Barco
With social robots entering the consumer market, there is a growing need to study child-robot interaction in a domestic environment. Therefore, the aim of this study was to explore children’s beliefs that underlie their intended adoption or rejection of a social robot for use in their homes. Based on a content analysis of data from 87 children, we found that hedonic beliefs (i.e., the belief that having a robot at home is pleasurable) were the most mentioned beliefs for domestic adoption of a social robot. More specifically, companionship was an often-mentioned hedonic belief. Social beliefs were rarely mentioned. If children mentioned beliefs for rejecting the robot, they often referred to family members and family composition. The findings of this exploratory study thus suggest that children’s hedonic beliefs play a central role in their intended adoption of a social robot in a domestic environment.
{"title":"Exploring Children’s Beliefs for Adoption or Rejection of Domestic Social Robots*","authors":"C. D. Jong, J. Peter, R. Kühne, C. L. V. Straten, Àlex Barco","doi":"10.1109/RO-MAN50785.2021.9515438","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515438","url":null,"abstract":"With social robots entering the consumer market, there is a growing need to study child-robot interaction in a domestic environment. Therefore, the aim of this study was to explore children’s beliefs that underlie their intended adoption or rejection of a social robot for use in their homes. Based on a content analysis of data from 87 children, we found that hedonic beliefs (i.e., the belief that having a robot at home is pleasurable) were the most mentioned beliefs for domestic adoption of a social robot. More specifically, companionship was an often-mentioned hedonic belief. Social beliefs were rarely mentioned. If children mentioned beliefs for rejecting the robot, they often referred to family members and family composition. The findings of this exploratory study thus suggest that children’s hedonic beliefs play a central role in their intended adoption of a social robot in a domestic environment.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"88 1","pages":"598-603"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80266550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attention deep learning based model for predicting the 3D Human Body Pose using the Robot Human Handover Phases
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515402
Javier Laplaza, Albert Pumarola, F. Moreno-Noguer, A. Sanfeliu
This work proposes a human motion prediction model for handover operations. We use the different phases of the handover operation to improve the human motion predictions. Our attention-based deep learning model takes into account the position of the robot’s End Effector (REE) and the phase of the handover operation to predict future human poses. The model outputs a distribution of possible positions rather than a single deterministic position, a key feature for allowing robots to collaborate with humans. We provide results for the human upper body and the human right hand, also referred to as the Human End Effector (HEE). The model has been trained and evaluated on a dataset created with human volunteers and an anthropomorphic robot, simulating handover operations in which the robot is the giver and the human the receiver. For each operation, the human skeleton is obtained with an Intel RealSense D435i camera mounted inside the robot’s head. The results show a marked improvement in the prediction of the human’s right hand and 3D body compared with other methods.
{"title":"Attention deep learning based model for predicting the 3D Human Body Pose using the Robot Human Handover Phases","authors":"Javier Laplaza, Albert Pumarola, F. Moreno-Noguer, A. Sanfeliu","doi":"10.1109/RO-MAN50785.2021.9515402","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515402","url":null,"abstract":"This work proposes a human motion prediction model for handover operations. We use in this work, the different phases of the handover operation to improve the human motion predictions. Our attention deep learning based model takes into account the position of the robot’s End Effector (REE) and the phase in the handover operation to predict future human poses. Our model outputs a distribution of possible positions rather than one deterministic position, a key feature in order to allow robots to collaborate with humans. We provide results of the human upper body and the human right hand, also referred as Human End Effector (HEE).The attention deep learning based model has been trained and evaluated with a dataset created using human volunteers and an anthropomorphic robot, simulating handover operations where the robot is the giver and the human the receiver. For each operation, the human skeleton is obtained with an Intel RealSense D435i camera attached inside the robot’s head. The results shown a great improvement of the human’s right hand prediction and 3D body compared with other methods.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"116 1","pages":"161-166"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80368843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Offline and Real-Time Implementation of a Personalized Wheelchair User Intention Detection Pipeline: A Case Study*
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515488
M. Khalili, Kevin Ta, J. Borisoff, H. V. D. Loos
Pushrim-activated power-assisted wheels (PAPAWs) are assistive technologies that provide on-demand assistance to wheelchair users. PAPAWs operate based on a collaborative control scheme and require an accurate interpretation of the user’s intent to provide effective propulsion assistance. This paper investigates a user-specific intention estimation framework for wheelchair users. We used Gaussian Mixture Models (GMMs) to identify implicit intentions from user-pushrim interactions (i.e., input torque to the pushrims). Six clusters emerged, associated with different phases of the stroke pattern and with the intended direction of motion. GMM predictions were used as "ground truth" labels for further intention estimation analysis. Next, Random Forest (RF) classifiers were trained to predict user intentions. The best-performing classifier had an overall prediction accuracy of 94.7%. Finally, a Bayesian filtering (BF) algorithm was used to capture sequential dependencies in the user-pushrim measurements. The BF algorithm improved sequences of intention predictions for some wheelchair maneuvers compared to the GMM and RF predictions. The proposed intention estimation pipeline is computationally efficient and was successfully tested and used for real-time prediction of wheelchair users’ intentions. This framework provides the foundation for the development of user-specific and adaptive PAPAW controllers.
{"title":"Offline and Real-Time Implementation of a Personalized Wheelchair User Intention Detection Pipeline: A Case Study*","authors":"M. Khalili, Kevin Ta, J. Borisoff, H. V. D. Loos","doi":"10.1109/RO-MAN50785.2021.9515488","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515488","url":null,"abstract":"Pushrim-activated power-assisted wheels (PAPAWs) are assistive technologies that provide on-demand assistance to wheelchair users. PAPAWs operate based on a collaborative control scheme and require an accurate interpretation of the user’s intent to provide effective propulsion assistance. This paper investigates a user-specific intention estimation framework for wheelchair users. We used Gaussian Mixture models (GMM) to identify implicit intentions from user-pushrim interactions (i.e., input torque to the pushrims). Six clusters emerged that were associated with different phases of a stroke pattern and the intention about the desired direction of motion. GMM predictions were used as \"ground truth\" labels for further intention estimation analysis. Next, Random Forest (RF) classifiers were trained to predict user intentions. The best optimal classifier had an overall prediction accuracy of 94.7%. Finally, a Bayesian filtering (BF) algorithm was used to extract sequential dependencies of the user-pushrim measurements. The BF algorithm improved sequences of intention predictions for some wheelchair maneuvers compared to the GMM and RF predictions. The proposed intention estimation pipeline is computationally efficient and was successfully tested and used for real-time prediction of wheelchair user’s intentions. This framework provides the foundation for the development of user-specific and adaptive PAPAW controllers.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"41 1","pages":"1210-1215"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77708065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-modal Proactive Approaching of Humans for Human-Robot Cooperative Tasks
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515475
Lakshadeep Naik, Oskar Palinko, L. Bodenhagen, N. Krüger
In this paper, we present a method for proactively approaching humans for human-robot cooperative tasks such as a robot serving beverages to people. The proposed method deals robustly with uncertainties in the robot’s perception while also ensuring socially acceptable behavior. We use multiple modalities, namely the robot’s motion, body orientation, speech and gaze, to proactively approach humans. Further, we present a behavior-tree-based control architecture to integrate these different modalities efficiently. The proposed method was successfully integrated and tested on a beverage-serving robot. We present the findings of our experiments and discuss possible extensions to address limitations.
{"title":"Multi-modal Proactive Approaching of Humans for Human-Robot Cooperative Tasks","authors":"Lakshadeep Naik, Oskar Palinko, L. Bodenhagen, N. Krüger","doi":"10.1109/RO-MAN50785.2021.9515475","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515475","url":null,"abstract":"In this paper, we present a method for proactive approaching of humans for human-robot cooperative tasks such as a robot serving beverages to people. The proposed method can deal robustly with the uncertainties in the robot’s perception while also ensuring socially acceptable behavior. We use multiple modalities in the form of the robot’s motion, body orientation, speech and gaze to proactively approach humans. Further, we present a behavior tree based control architecture to efficiently integrate these different modalities. The proposed method was successfully integrated and tested on a beverage serving robot. We present the findings of our experiments and discuss possible extensions to address limitations.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"51 1","pages":"323-329"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91294242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Do You Mind if I Pass Through? Studying the Appropriate Robot Behavior when Traversing two Conversing People in a Hallway Setting*
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515430
Björn Petrak, Gundula Sopper, Katharina Weitz, E. André
Several works highlight how robots can navigate in a socially aware manner by respecting and avoiding people’s personal spaces. But how should a robot act when there is no way around a group of people? In this work, we explore this question by comparing three different ways for a robot to pass two conversing people in a hallway environment. In an online study with 135 participants, users rated the robot’s behavior on several items, such as its "social adequacy" and how "disturbing" it was. The three versions differ in the type of contact intention: no contact, nonverbal contact, and a combination of nonverbal and verbal contact. The results show that, on the one hand, users expect social behavior from the robot so that they can anticipate its actions, but on the other hand, they want it to be as unobtrusive as possible.
{"title":"Do You Mind if I Pass Through? Studying the Appropriate Robot Behavior when Traversing two Conversing People in a Hallway Setting*","authors":"Björn Petrak, Gundula Sopper, Katharina Weitz, E. André","doi":"10.1109/RO-MAN50785.2021.9515430","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515430","url":null,"abstract":"Several works highlight how robots can navigate in a socially-aware manner by respecting and avoiding people’s personal spaces. But how should the robot act when there is no way around a group of persons? In this work, we explore this question by comparing three different ways to cross two conversing people in a hallway environment. In an online study with 135 participants, users rated the robot’s behavior on several items such as \"social adequacy\" or how \"disturbing\" it was. The three versions differ in the type of contact intention, i.e., no contact, nonverbal contact, and a combination of nonverbal and verbal contact. The results show that, on the one hand, users expect social behavior from the robot, so that they can anticipate its behavior, but on the other hand, they want it to be as little disruptive as possible.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"42 1","pages":"369-375"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84503075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Arm Motion Prediction in Reaching Movements*
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515461
Alexander Nguyen, Biyun Xie
There is an increasing interest in accurately predicting natural human arm motions for areas like human-robot interaction, wearable robots, and ergonomic simulations. This paper studies the problem of predicting natural fingertip and joint trajectories in human arm reaching movements. Compared to the widely-used minimum jerk model, the 5-parameter logistic model can represent natural fingertip trajectories more accurately. Based on 3520 human arm motions recorded by a motion capture system, regression learning is used to predict the five parameters representing the fingertip trajectory for a given target point. Then, the elbow swivel angle is predicted using regression learning to resolve the kinematic redundancy of the human arm at discrete fingertip positions. Finally, discrete joint angles are solved based on the predicted elbow swivel angles and then fitted to a continuous 5-parameter logistic function to obtain the joint trajectory. This method is verified using 48 test motions, and the results show that this method can generate accurate human arm motions.
{"title":"Human Arm Motion Prediction in Reaching Movements*","authors":"Alexander Nguyen, Biyun Xie","doi":"10.1109/RO-MAN50785.2021.9515461","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515461","url":null,"abstract":"There is an increasing interest in accurately predicting natural human arm motions for areas like human-robot interaction, wearable robots, and ergonomic simulations. This paper studies the problem of predicting natural fingertip and joint trajectories in human arm reaching movements. Compared to the widely-used minimum jerk model, the 5-parameter logistic model can represent natural fingertip trajectories more accurately. Based on 3520 human arm motions recorded by a motion capture system, regression learning is used to predict the five parameters representing the fingertip trajectory for a given target point. Then, the elbow swivel angle is predicted using regression learning to resolve the kinematic redundancy of the human arm at discrete fingertip positions. Finally, discrete joint angles are solved based on the predicted elbow swivel angles and then fitted to a continuous 5-parameter logistic function to obtain the joint trajectory. This method is verified using 48 test motions, and the results show that this method can generate accurate human arm motions.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"27 1","pages":"1117-1123"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78721814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BabyNet: A Lightweight Network for Infant Reaching Action Recognition in Unconstrained Environments to Support Future Pediatric Rehabilitation Applications
Amel Dechemi, Vikarn Bhakri, Ipsita Sahin, Arjun Modi, Julya Mestas, Pamodya Peiris, Dannya Enriquez Barrundia, Elena Kokkoni, Konstantinos Karydis
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515507
Action recognition is an important component for improving the autonomy of physical rehabilitation devices such as wearable robotic exoskeletons. Existing human action recognition algorithms focus on adult applications rather than pediatric ones. In this paper, we introduce BabyNet, a lightweight (in terms of trainable parameters) network structure to recognize infant reaching actions from off-body stationary cameras. We develop an annotated dataset that includes diverse reaches performed in a sitting posture by different infants in unconstrained environments (e.g., in home settings). Our approach uses the spatial and temporal connection of annotated bounding boxes to interpret the onset and offset of reaching and to detect a complete reaching action. We evaluate the efficiency of our proposed approach and compare its performance against other learning-based network structures in terms of its capability to capture temporal inter-dependencies and its accuracy in detecting reaching onset and offset. Results indicate that BabyNet attains solid (average) testing accuracy exceeding that of other, larger networks, and can hence serve as a lightweight, data-driven framework for video-based infant reaching action recognition.
{"title":"BabyNet: A Lightweight Network for Infant Reaching Action Recognition in Unconstrained Environments to Support Future Pediatric Rehabilitation Applications","authors":"Amel Dechemi, Vikarn Bhakri, Ipsita Sahin, Arjun Modi, Julya Mestas, Pamodya Peiris, Dannya Enriquez Barrundia, Elena Kokkoni, Konstantinos Karydis","doi":"10.1109/RO-MAN50785.2021.9515507","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515507","url":null,"abstract":"Action recognition is an important component to improve autonomy of physical rehabilitation devices, such as wearable robotic exoskeletons. Existing human action recognition algorithms focus on adult applications rather than pediatric ones. In this paper, we introduce BabyNet, a light-weight (in terms of trainable parameters) network structure to recognize infant reaching action from off-body stationary cameras. We develop an annotated dataset that includes diverse reaches performed while in a sitting posture by different infants in unconstrained environments (e.g., in home settings, etc.). Our approach uses the spatial and temporal connection of annotated bounding boxes to interpret onset and offset of reaching, and to detect a complete reaching action. We evaluate the efficiency of our proposed approach and compare its performance against other learning-based network structures in terms of capability of capturing temporal inter-dependencies and accuracy of detection of reaching onset and offset. Results indicate our BabyNet can attain solid performance in terms of (average) testing accuracy that exceeds that of other larger networks, and can hence serve as a light-weight data-driven framework for video-based infant reaching action recognition.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"13 1","pages":"461-467"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89726618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling Robots to Adhere to Social Norms by Detecting F-Formations
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515484
A. Kollakidou, Lakshadeep Naik, Oskar Palinko, L. Bodenhagen
Robot navigation in environments shared with humans should take social structures and interactions into account. The identification of social groups has been a challenge for robotics as it encompasses a number of disciplines. We propose a hierarchical clustering method for grouping individuals into free-standing conversational groups (FSCGs), utilising their position and orientation. The proposed method is evaluated on the SALSA dataset, achieving an F1 score of 0.94. The algorithm is also evaluated for scalability and implemented on a mobile robot attempting to detect social groups and engage in interaction.
{"title":"Enabling Robots to Adhere to Social Norms by Detecting F-Formations","authors":"A. Kollakidou, Lakshadeep Naik, Oskar Palinko, L. Bodenhagen","doi":"10.1109/RO-MAN50785.2021.9515484","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515484","url":null,"abstract":"Robot navigation in environments shared with humans should take into account social structures and interactions. The identification of social groups has been a challenge for robotics as it encompasses a number of disciplines. We propose a hierarchical clustering method for grouping individuals into free standing conversational groups (FSCS), utilising their position and orientation. The proposed method is evaluated on the SALSA dataset with achieved F1 score of 0.94. The algorithm is also evaluated for scalability and implemented on a mobile robot attempting to detect social groups and engage in interaction.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"5 1","pages":"110-116"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86537253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validation of Robot Interactive Behaviors Through Users Emotional Perception and Their Effects on Trust
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515352
Ilenia Cucciniello, S. SanGiovanni, Gianpaolo Maggi, Silvia Rossi
When modeling the social behavior of a robot, simulating a specific personality or interaction style may affect the perception of the interaction itself and the acceptability of the robot. Different interaction styles may be simulated through verbal and non-verbal features that users may not recognize as the designer intended. For this reason, this study evaluated how three robot interaction styles (Friendly, Neutral, and Authoritarian) were perceived by humans in the context of a robot administering cognitive tests. The Self-Assessment Manikin (SAM) was used to measure perceived Valence, Arousal, and Dominance. We expected Neutral behavior to be characterized by low Arousal, Friendly behavior by high Valence, and Authoritarian behavior by high Dominance. Moreover, the perception of a Socially Assistive Robot’s behavior is closely linked to trust, which is a key component of any successful care-provider/user relationship. Hence, a Trust Perception Scale was used to explore the effect of interaction style on trust. The results confirmed our hypothesis, showing significant differences among the three styles on each dimension. Furthermore, we expected the Authoritarian style to yield higher trust, since users who interacted with the Authoritarian robot performed better than the others; however, this hypothesis was not confirmed by the results.
{"title":"Validation of Robot Interactive Behaviors Through Users Emotional Perception and Their Effects on Trust","authors":"Ilenia Cucciniello, S. SanGiovanni, Gianpaolo Maggi, Silvia Rossi","doi":"10.1109/RO-MAN50785.2021.9515352","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515352","url":null,"abstract":"When modeling the social behavior of a robot, the simulation of a specific personality or different interaction style may affect the perception of the interaction itself and the acceptability of the robot. Different interaction styles may be simulated through the use of verbal and non-verbal features that may not be easily recognized by the user as intended by the designer. For this reason, this study aimed to evaluate how three different robot interaction styles (i.e., Friendly, Neutral, and Authoritarian) were perceived by humans in the context of a robot carrying out cognitive tests. The Self-Assessment Manikin (SAM) was proposed to measure the perceived Valence, Arousal, and Dominance. We expected that a Neutral behavior is characterized by low Arousal, a Friendly by high Valence, and an Authoritarian by high Dominance. Moreover, the perception of a Socially Assistive Robot’s behavior is closely linked to trust, which is a key component to the success of any care-provider/user relationship. Hence, a Trust Perception Scale was used to explore the effect of the interaction style on trust. The results confirmed our hypothesis and showed a significant difference between each value with the others. Furthermore, we expected to obtain a higher value of trust with the Authoritarian since the performance of the users who interacted with the Authoritarian was better than the others. However, this hypothesis was not confirmed by the results.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"36 1","pages":"197-202"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86543922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}