Pub Date: 2022-08-29
DOI: 10.1109/RO-MAN53752.2022.9900685
Antonio Galiza Cerdeira Gonzalez, W. Lo, I. Mizuuchi
During the COVID-19 pandemic, many research areas that require in-person experiments with human volunteers were impacted by lockdowns and other activity-restricting policies. The field of robotics is no exception, and human-robot interaction research in particular has been severely affected. To circumvent the difficulty of gathering volunteers in person to interact with a robot, we built a novel crowdsourcing web platform to host our "Talk to Kotaro" experiment, in which volunteers talk to a robot avatar and react to its semantic-free utterances. The platform, built with the Python Flask framework, supports such interactions while recording audio, video, and other relevant data, which will be used to study human impression estimation of gibberish speech. This paper describes not only the experiment and its preliminary results but also the platform itself; such a tool is essential during pandemics and very useful in regular times, because it enables crowdsourcing data from all over the world.
Title: Talk to Kotaro: a web crowdsourcing study on the impact of phone and prosody choice for synthesized speech on human impression
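As an illustration of the kind of endpoint such a crowdsourcing platform needs, here is a minimal Flask sketch of an upload route for recorded reaction clips. The route name, form fields, and in-memory store are assumptions for illustration, not the authors' implementation.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
recordings = []  # in-memory store; a real deployment would persist to disk or a database

@app.route("/upload", methods=["POST"])
def upload():
    # Receive one audio/video reaction clip plus a participant identifier.
    clip = request.files["reaction"]
    recordings.append({
        "participant": request.form.get("participant", "anonymous"),
        "filename": clip.filename,
        "size_bytes": len(clip.read()),
    })
    return jsonify(status="ok", count=len(recordings))
```

In practice the browser side would capture audio and video with the MediaRecorder API and POST each blob to such a route as multipart form data.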
Pub Date: 2022-08-29
DOI: 10.1109/RO-MAN53752.2022.9900838
Carolin Straßmann, Inga Diehl
This paper examines how conversational agents (CAs) can communicate emotions non-verbally using light as a communication modality. To this end, we manipulated the CA Alexa to display emotions (joy and sorrow) using different light modalities (the Echo Dot ring, a Hue lamp, and their combination) with either a congruent verbal context (a party or a funeral) or no verbal context. In an online study, 167 participants watched a video of a user interacting with Alexa and then rated Alexa's perceived emotion, their own emotional state, and their perception of Alexa. Although Alexa's perceived emotions were not affected by the experimental conditions, the results indicate that both the perception of Alexa and the users' own emotions are affected by the displayed communication modality. Since external light can be used to shape users' perception of CAs, the findings have relevant implications for the design of CAs.
Title: Alexa Feels Blue And so Do I? Conversational Agents Displaying Emotions via Light Modalities
Pub Date: 2022-08-29
DOI: 10.1109/RO-MAN53752.2022.9900812
Taiken Shintani, C. Ishi, H. Ishiguro
In this study, we describe an improved version of our model for generating gaze movements (eye and head movements) of a dialogue robot in multi-party dialogue situations, and investigate how impressions change for models created from the data of speakers with different personalities. For this purpose, we used multimodal three-party dialogue data and first analyzed the distributions of (1) the gaze target (towards dialogue partners or gaze aversion), (2) the gaze duration, and (3) the eyeball direction during gaze aversion. We then generated gaze behaviors in an android robot (Nikola) using the data of two people found to have distinctive personalities and conducted subjective evaluation experiments. The results showed a significant difference in the perceived personalities of the motions generated by the two models.
Title: Expression of Personality by Gaze Movements of an Android Robot in Multi-Party Dialogues*
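A toy version of such distribution-driven gaze generation can be sketched as follows. The target categories, probabilities, and duration parameters are invented for illustration; they are not values from the paper's analysis.

```python
import random

# Hypothetical empirical distributions in the spirit of the paper's analysis:
# where the robot looks, and for how long (mean, sd in seconds).
GAZE_TARGET_PROBS = {"partner_A": 0.40, "partner_B": 0.35, "aversion": 0.25}
GAZE_DURATION_S = {"partner_A": (1.5, 0.5), "partner_B": (1.4, 0.4), "aversion": (0.8, 0.3)}

def sample_gaze_event(rng=random):
    # Draw a gaze target from the categorical distribution,
    # then a duration from that target's Gaussian.
    targets = list(GAZE_TARGET_PROBS)
    target = rng.choices(targets, weights=[GAZE_TARGET_PROBS[t] for t in targets])[0]
    mean, sd = GAZE_DURATION_S[target]
    duration = max(0.1, rng.gauss(mean, sd))  # clamp away from zero
    return target, duration
```

Fitting the two tables to two different speakers' data would yield two such samplers, which is essentially how personality differences propagate into the generated gaze behavior.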
Pub Date: 2022-08-29
DOI: 10.1109/RO-MAN53752.2022.9900796
Matthew Rueben, Eitan Rothberg, Matthew Tang, Sarah Inzerillo, Saurabh Kshirsagar, Maansi Manchanda, Ginger Dudley, Marlena R. Fraune, M. Matarić
People often hold inaccurate mental models of robots. When such misconceptions concern a robot's perceptual capabilities, they can lead to issues with safety, privacy, and interaction efficiency. This work is the first attempt to model users' beliefs about a robot's perceptual capabilities and plan to improve their accuracy, that is, to perform belief repair. We designed a new domain called the Robot Olympics, implemented it as a web-based game platform for collecting data about users' beliefs, and developed an approach to estimating and influencing users' beliefs about a virtual robot in that domain. We then conducted a study that collected behavior and belief data from 240 online participants who played the game. Results revealed shortcomings in modeling participants' interpretations of the robot's actions, as well as the decision-making process behind their own actions. The insights from this work provide recommendations for designing further studies and improving user models to support belief repair in human-robot interaction.
Title: The Robot Olympics: Estimating and Influencing Beliefs About a Robot’s Perceptual Capabilities
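Estimating a user's belief from their behaviour can be illustrated with a one-step Bayesian update. The observed event ("the user hides a coloured item from the robot") and the likelihood values below are hypothetical, not taken from the study.

```python
def update_belief(prior, hid_item, p_hide_if_believes=0.8, p_hide_if_not=0.2):
    """Posterior probability that the user believes the robot can see colour,
    after observing whether they hid a coloured item (hypothetical event)."""
    like = p_hide_if_believes if hid_item else 1 - p_hide_if_believes
    alt = p_hide_if_not if hid_item else 1 - p_hide_if_not
    # Bayes' rule over the two hypotheses: "believes" vs. "does not believe".
    return like * prior / (like * prior + alt * (1 - prior))
```

Belief repair then amounts to choosing robot actions expected to push this posterior toward the robot's true capability.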
Pub Date: 2022-08-29
DOI: 10.1109/RO-MAN53752.2022.9900823
Andrea Monguzzi, Mahmoud Badawi, A. Zanchettin, P. Rocco
In this work, we address two crucial issues that arise in the design of a human-robot collaborative assembly station: optimal task allocation and scheduling. We propose an offline method that solves the two issues in sequence, assuming a static allocation and taking into account several objectives, such as minimizing postural discomfort, operation processing times, idle times, and hence the total cycle time. Our methodology combines a capability-based method, in which the agents' capabilities are tested against a list of predefined criteria, with optimization. In particular, we formulate a modified version of the Hungarian Algorithm that also solves unbalanced assignment problems, where the number of tasks differs from the number of agents. The scheduling policy is obtained by means of a Mixed Integer Linear Programming (MILP) formulation with multi-objective optimization. Moreover, we formalize the concepts of operation, assembly tree, and precedence graph, since they are the inputs to our method, together with information on the workstation layout and the selected robot type. Finally, the proposed solution is applied to a case study to determine the optimal task allocation and scheduling for two different workstation layouts; the results are compared and the better layout is selected.
Title: A mixed capability-based and optimization methodology for human-robot task allocation and scheduling
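The standard trick for unbalanced assignment, padding the cost matrix to square with zero-cost dummy agents, can be sketched as follows. This toy version brute-forces the padded matrix rather than running the authors' modified Hungarian Algorithm, and the cost values in the usage example are illustrative.

```python
import itertools

def assign_unbalanced(cost):
    """Assign agents (rows) to tasks (columns) when their counts differ.
    Pads the rectangular cost matrix to square with zero-cost dummies,
    then finds the min-cost assignment by brute force (fine for a sketch;
    a Hungarian-style solver is needed at realistic sizes)."""
    n_agents, n_tasks = len(cost), len(cost[0])
    size = max(n_agents, n_tasks)
    padded = [row + [0] * (size - n_tasks) for row in cost]
    padded += [[0] * size for _ in range(size - n_agents)]
    best = min(itertools.permutations(range(size)),
               key=lambda p: sum(padded[i][p[i]] for i in range(size)))
    # Keep only real agent -> real task pairs, dropping dummy rows/columns.
    return {i: best[i] for i in range(n_agents) if best[i] < n_tasks}
```

For example, with two agents and three tasks, `assign_unbalanced([[4, 1, 3], [2, 0, 5]])` leaves one task unassigned, which in a collaborative cell would be carried over to the next cycle or reassigned.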
Pub Date: 2022-08-29
DOI: 10.1109/RO-MAN53752.2022.9900678
Brian J. Zhang, C. A. Sanchez, Naomi T. Fitter
Transformative robot sound can improve perceptions of robots, but implementing it will likely require additional hardware and cost. Does adding transformative sound yield an increase in value that offsets this cost? Using the van Westendorp Price Sensitivity Meter, a questionnaire from marketing research, we had n = 97 participants report acceptable price points for a robot with and without transformative sound. The results showed perceptual improvements similar to past studies, as well as a significant increase in perceived value, when transformative sound was included. These gains in the social and value perceptions of robots confirm the utility of adding transformative sound. This work benefits the broader human-robot interaction research community by sharing additional ways to understand and validate the incorporation of transformative robot sound and other robot features.
Title: Using the Price Sensitivity Meter to Measure the Value of Transformative Robot Sound
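The core of the van Westendorp analysis, finding the price where the "too cheap" and "too expensive" curves cross, can be sketched with a grid search. The sample answers in the usage example are invented, and the full method uses all four price questions, not just these two.

```python
def psm_optimal_price(too_cheap, too_expensive, grid):
    """Return the grid price where the share still calling the price 'too
    cheap' meets the share already calling it 'too expensive' (a crude
    stand-in for the curve intersection in the full van Westendorp method)."""
    def share_too_cheap(p):  # respondents whose 'too cheap' threshold is >= p
        return sum(v >= p for v in too_cheap) / len(too_cheap)
    def share_too_expensive(p):  # respondents whose threshold is <= p
        return sum(v <= p for v in too_expensive) / len(too_expensive)
    return min(grid, key=lambda p: abs(share_too_cheap(p) - share_too_expensive(p)))
```

Running this on the with-sound and without-sound answer sets would give the two price points whose difference quantifies the added value of the sound.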
Pub Date: 2022-08-29
DOI: 10.1109/RO-MAN53752.2022.9900826
Martin Moder, J. Pauli
Dense crowds are challenging scenes for an autonomous mobile robot. Planning in such an interactive environment requires predicting uncertain human intentions and reactions to future robot actions. To provide these capabilities, we propose a probabilistic forecasting model that factorizes human motion uncertainty as follows: 1) a conditional normalizing flow (CNF) estimates the densities of human goals; 2) the density of trajectories toward those goals is predicted autoregressively (AR), with the density of individual social actions inferred simultaneously for a dynamic number of humans. The underlying Gaussian AR framework is extended with our SocialSampling to counteract collisions during sampling. The model lets us determine, for the same goals, a crowd prediction conditioned on a particular robot plan and one independent of it. We show that the divergence between these two probabilistic predictions can be computed efficiently, and we derive our Social Influence (SI) objective from it. Finally, we propose a model-predictive policy for robot crowd navigation that minimizes the SI objective: the robot accounts for the effect of its future movement so as not to disturb humans where possible. Experiments on real datasets show that the model achieves state-of-the-art accuracy in predicting pedestrian movements. Furthermore, our evaluations show that the robot policy with the SI objective produces safe and proactive behaviors, such as taking evasive action at the right time to avoid conflicts.
Title: Proactive Robot Movements in a Crowd by Predicting and Considering the Social Influence
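The divergence between a plan-conditioned and a plan-independent prediction has a closed form when both are Gaussian. The one-dimensional version below is a generic illustration of that kind of comparison, not the paper's actual SI derivation.

```python
import math

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """KL(p || q) for one-dimensional Gaussians. In the crowd-navigation
    setting, p could be the robot-conditioned prediction of a pedestrian's
    next position and q the robot-independent one; a large value flags a
    plan that noticeably perturbs the crowd."""
    return 0.5 * (math.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)
```

A model-predictive loop would evaluate such a divergence for each candidate plan and prefer plans whose predictions stay close to the undisturbed crowd motion.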
Pub Date: 2022-08-29
DOI: 10.1109/RO-MAN53752.2022.9900655
Zachary Henkel, Kenna Baugus Henkel, Cindy L. Bethel
While Wizard-of-Oz (WOz) techniques are frequently used to supplement a machine’s abilities, extending this approach to human entities can increase experimental control in studies comparing evaluations of humans and machines in the same role. This article describes the design, implementation, and use of a WOz system for facilitating controlled verbal interactions between children and robot or human interviewers. A collaborative interface allows multiple remote wizards to combine participant responses with interaction-specific goals in order to direct a robot or human interviewer’s behavior in a consistent manner. While robot interviewers are controlled directly, human interviewers receive direction through a tablet device or via a projection system concealed from participants. In addition to the system’s technical design, we describe the division of responsibilities between wizards and insights from using the system across three extensive interview studies to facilitate a total of 217 interactions with children.
Title: Wizards in the Middle: An Approach to Comparing Humans and Robots*
Pub Date: 2022-08-29
DOI: 10.1109/RO-MAN53752.2022.9900870
Annika Boos, Markus Zimmermann, Monika M. Zych, K. Bengler
Robots are increasingly involved in tasks that require them to navigate social spaces shared with humans. Following social norms is considered a key requirement for such robots to ensure their acceptance and long-term use. This paper focuses on delivery robots, which often encounter problems in their operational areas, in this case a busy university campus, when they find their way blocked by people and cannot move on towards their destination. We explored automated cue triggering to resolve this situation autonomously, without the help of remote operators. Eighty-three pedestrians participated in a real-world study with a delivery robot. Four cues were tested for their perceived politeness and ambiguity; they differed in the presence or absence of an instruction, the presence or absence of a justification for the request to let the robot pass, and the source orientation within the justification, which was either internally (self-) directed or externally (user-) directed. The results reveal a complex picture. Overall, verbal instructions had a positive effect on social attributions to the robot compared to staying mute. Contrary to our expectations, there was no significant difference in politeness between the different requests.
Participants’ willingness to let the robot pass was positively correlated with perceived politeness, and negatively correlated with the ambiguity of the requests.
Title: Polite and Unambiguous Requests Facilitate Willingness to Help an Autonomous Delivery Robot and Favourable Social Attributions
Pub Date: 2022-08-29
DOI: 10.1109/RO-MAN53752.2022.9900732
Zican Wang, Fei Mei, Xiao Xu, E. Steinbach
This paper presents a novel quality assessment approach for predicting the subjective haptic experience in time-delayed teleoperation. With the rapid development of haptic technology in remote robot control and virtual reality, new control schemes and hardware systems are being developed to provide high-quality human-in-the-loop teleoperation. Our subjective experiments indicate that existing objective quality assessment metrics do not sufficiently correlate with users' subjective haptic experience, a gap that currently forces expensive and time-consuming subjective experiments. To provide fast and accurate prediction of the subjective experience instead, we analyze and explain the mismatch between subjective and objective haptic signal quality metrics. To this end, we conducted extensive subjective experiments and case studies on teleoperation with time delay and haptic data reduction. Based on the experimental results, we propose a quality assessment approach that predicts the subjective quality of experience from multiple objective metrics.
For the one-dimensional spring model, the Spearman rank-order, Kendall rank-order, and Pearson linear correlation coefficients (SROCC, KLOCC, and PLCC) between our model’s predictions and the subjective experiment results show a remarkable improvement in the correlation between subjective and objective quality assessment.
Title: Towards Subjective Experience Prediction for Time-Delayed Teleoperation with Haptic Data Reduction
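Correlation coefficients of the kind reported here can be computed in a few lines of plain Python. This sketch ignores tied ranks, which the standard SROCC definition handles via averaged ranks.

```python
def pearson(x, y):
    # Pearson's linear correlation coefficient (PLCC).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman's rank-order correlation (SROCC): Pearson on the ranks.
    # Note: no tie handling in this sketch.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))
```

SROCC rewards any monotonic relationship between predicted and reported quality, while PLCC additionally requires linearity, which is why quality-assessment studies typically report both.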