Constructs intended to capture social attitudes and behaviour towards social robots are highly varied, with little overlap or consistency in how they may be related. In this study we conduct exploratory analyses of the relationship between participants’ self-reported attitudes and their behaviour towards a social robot. We designed an autonomous interaction in which 102 participants interacted with a social robot (Pepper) in a hypothetical travel planning scenario, during which the robot displayed various multi-modal social behaviours. Several behavioural measures were embedded throughout the interaction, followed by a self-report questionnaire targeting participants’ social attitudes towards the robot (social trust, liking, rapport, competency trust, technology acceptance, mind perception, social presence, and social information processing). Several relationships were identified between participants’ behaviour and self-reported attitudes towards the robot. Implications for how to conceptualise and measure interactions with social robots are discussed.
Published as "Does what users say match what they do? Comparing self-reported attitudes and behaviours towards a social robot," by Rebecca Stower, Karen Tatarian, Damien Rudaz, Marine Chamoux, M. Chetouani, and Arvid Kappas, in the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). Pub Date: 2022-08-29. DOI: 10.1109/RO-MAN53752.2022.9900782.
Pub Date: 2022-08-29. DOI: 10.1109/RO-MAN53752.2022.9900852
Hazel M. Taylor, C. Jay, B. Lennox, A. Cangelosi, Louise Dennis
There is growing interest in the use of robotics and AI in the nuclear industry; however, it is important to ensure these systems are ethically grounded, trustworthy, and safe. An emerging technique for addressing these concerns is explainability. In this paper we present the results of an interview study with nuclear industry experts exploring the use of explainable intelligent systems within the field. We interviewed 16 participants with varying backgrounds of expertise and presented two potential use cases for evaluation: a navigation scenario and a task scheduling scenario. Through an inductive thematic analysis we identified the aspects of a deployment that experts want explainable systems to communicate, and we outline how these relate to the folk conceptual theory of explanation, a framework describing how people explain behaviours. We established that, during decision making in nuclear deployments, an intelligent system should explain its reasons for an action, its expectations of itself, changes in the environment that impact decision making, probabilities and the elements within them, safety implications and mitigation strategies, and robot health and component failures. We determine that these factors could be explained with cause, reason, and enabling factor explanations.
Published as "Should AI Systems in Nuclear Facilities Explain Decisions the Way Humans Do? An Interview Study" in the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).
Pub Date: 2022-08-29. DOI: 10.1109/RO-MAN53752.2022.9900549
Alex Mazursky, Madeleine DeVoe, S. Sebo
Robot-initiated touch is a promising mode of expression that would allow robot caregivers to perform physical tasks (instrumental touch) and provide comfort (affective touch) in healthcare settings. To understand the factors that shape how people respond to touch from a robotic caregiver, we conducted a crowdsourced study (N=163) examining how robot-initiated touch (present or absent), the robot’s intent (instrumental or affective), robot appearance (Nao or Stretch), and robot tone (empathetic or serious) impact the perceived quality of care. Results show that participants prefer instrumental to affective touch, view the robot as having greater social attributes (higher warmth, higher competence, and lower discomfort) after robot-initiated touch, are more comfortable interacting with the human-like Nao than the more machine-like Stretch, and favor consistent robot tone and appearance. From these results, we derived three design guidelines for caregiving robots in healthcare settings.
Published as "Physical Touch from a Robot Caregiver: Examining Factors that Shape Patient Experience" in the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).
Pub Date: 2022-08-29. DOI: 10.1109/RO-MAN53752.2022.9900695
Alessandra Rossi, Antonio Andriella, Silvia Rossi, C. Torras, G. Alenyà
The success of human-robot interaction is strongly affected by people’s ability to infer others’ intentions and behaviours, and by the level of people’s trust that others will abide by the same principles and social conventions to achieve a common goal. The ability to understand and reason about other agents’ mental states is known as Theory of Mind (ToM). ToM and trust, therefore, are key factors in the positive outcome of human-robot interaction. We believe that a robot endowed with a ToM is able to gain people’s trust, even when it may occasionally make errors. In this work, we present a user study in the field in which participants (N=123) interacted with a robot that may or may not have a ToM, and may or may not exhibit erroneous behaviour. Our findings indicate that participants perceived a robot with a ToM as more reliable and trusted it more than a robot without a ToM, even when the robot made errors. Finally, ToM proved to be a key driver of people’s trust in the robot even when the initial conditions of the interaction changed (i.e., through loss and regain of trust over a longer relationship).
Published as "Evaluating the Effect of Theory of Mind on People’s Trust in a Faulty Robot" in the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).
Pub Date: 2022-08-29. DOI: 10.1109/RO-MAN53752.2022.9900523
Elmira Yadollahi, Marta Couto, P. Dillenbourg, Ana Paiva
Recent studies with children have pointed to spatial thinking as an essential factor in determining later success in STEM-related fields. The current study explores the potential of embodied activities with robots to aid the development of children’s spatial perspective-taking abilities. This research focuses on evaluating children’s spatial perspective-taking abilities and assessing the potential of the designed activity for practicing perspective-taking. The activity design, inspired by the dynamic and mental processes involved in remote-controlled cars and racing games, is built around a Cozmo robot and involves guiding the robot through a maze while considering the robot’s point of view. We evaluated the activity through a user study with 22 elementary school children between the ages of 8 and 9. The findings showed that children’s performance at different angular disparities was aligned with previous research in developmental psychology. Additionally, most children made fewer mistakes in guiding the robot as they played more. Finally, while we did not observe any performance improvement in the group of children who had access to the robot’s point of view during the game, we gained new insights into how children perceived seeing the maze through the robot’s eyes.
Published as "Motivating Children to Practice Perspective-Taking Through Playing Games with Cozmo" in the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).
Pub Date: 2022-08-29. DOI: 10.1109/RO-MAN53752.2022.9900696
Vinh Phu Nguyen, Thang Duc Tran, I. Kuo
Passing oncoming pedestrians is a common task for pairs on the move. While pairs generally prefer walking side by side, that mode occupies more of the pathway, leaving less space for pedestrians traveling in the opposite direction than a leader-follower mode in which one walks behind the other. Humans therefore often intuitively seek solutions that balance side-by-side walking against leaving moving space for others when passing. This is also a problem that designers of companion robots must solve. By discovering, modeling, and incorporating a new factor, the habit of moving with the flow and with the surrounding density of movement (called dynamic density), this work proposes a novel model that determines natural navigation pathways for a companion robot passing multiple pedestrians walking in the opposite direction, mimicking human passing behaviors. Based on two experimental observations and data collections, the model was developed and then validated by comparing the pathways it generated with the natural moving plans of pairs in the same situations. The simulation results show that the new model determines moving plans for pairs in passing situations that are similar to the real decisions of humans.
Published as "A model for determining natural pathways for side-by-side companion robots in passing pedestrian flows using dynamic density" in the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).
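The trade-off the abstract describes — side-by-side comfort versus space for oncoming flow — can be illustrated with a toy mode-selection rule. Everything below (the class, the threshold, and the rule itself) is a hypothetical sketch for intuition, not the authors' model:

```python
from dataclasses import dataclass


@dataclass
class PassingContext:
    corridor_width: float    # usable pathway width in metres
    pair_width: float        # footprint of the pair walking side by side, in metres
    oncoming_density: float  # assumed "dynamic density" of oncoming pedestrians


def choose_walking_mode(ctx: PassingContext, density_threshold: float = 0.5) -> str:
    """Toy rule: prefer side-by-side walking, but yield to leader-follower
    when the pair would not leave enough room or the oncoming flow is dense."""
    free_width = ctx.corridor_width - ctx.pair_width
    if free_width <= 0.5 or ctx.oncoming_density > density_threshold:
        return "leader-follower"
    return "side-by-side"
```

A real model would plan continuous pathways rather than pick between two discrete modes, but the sketch captures the core idea: the denser the oncoming flow, the stronger the incentive to give up the side-by-side formation.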
Pub Date: 2022-08-29. DOI: 10.1109/RO-MAN53752.2022.9900722
Nicole Salomons, Tom Wallenstein, Debasmita Ghose, B. Scassellati
Regular exercise provides many mental and physical health benefits. However, exercises done incorrectly can lead to injuries. Because the COVID-19 pandemic made it challenging to exercise in communal spaces, the growth of virtual fitness programs accelerated, putting people at risk of sustaining exercise-related injuries as they received little to no feedback on their exercising technique. Co-located robots could be one potential enhancement to virtual training programs, as they can produce higher learning gains, more compliance, and more enjoyment than non-co-located robots. In this study, we compare the effects of a physically present robot by having a person exercise either with a robot (robot condition) or with a video of a robot displayed on a tablet (tablet condition). Participants (N=25) had an exercise system in their homes for two weeks. Participants who exercised with the co-located robot made fewer mistakes than those who exercised with the video-displayed robot. Furthermore, participants in the robot condition reported a higher fitness increase and more motivation to exercise than participants in the tablet condition.
Published as "The Impact of an In-Home Co-Located Robotic Coach in Helping People Make Fewer Exercise Mistakes" in the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).
Pub Date: 2022-08-29. DOI: 10.1109/RO-MAN53752.2022.9900673
Tom Hitron, Benny Megidish, Etay Todress, Noa Morag, H. Erel
With recent advancements in AI, there are growing concerns about human biases embedded in AI decisions. The threats posed by AI bias may be even more drastic in robots, which are perceived as independent entities whose decisions are not mediated by humans. Furthermore, technology is typically perceived as objective, and there is a risk that people will embrace its decisions without considering possible biases. To understand the extent of the threat posed by such biases, we evaluated participants’ responses to a gender-biased robot mediating a debate between two participants (one male, one female). The vast majority of participants did not associate the robot’s behavior with a bias, despite being informed that the robot’s algorithm was based on human examples. Participants attributed the robot’s decisions to their own performance and used explanations involving gender stereotypes. Our findings suggest that robots’ biased behaviors can serve as validation for common human stereotypes.
Published as "AI bias in Human-Robot Interaction: An evaluation of the Risk in Gender Biased Robots" in the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).
Pub Date: 2022-08-29. DOI: 10.1109/RO-MAN53752.2022.9900711
Masaaki Hayashi, Tamon Miyake, Mitsuhiro Kamezaki, J. Yamato, Kyosuke Saito, Taro Hamada, Eriko Sakurai, S. Sugano, J. Ohya
Vision plays an important role in motion planning for mobile robots that coexist with humans. Because camera-based pedestrian path prediction involves a trade-off between calculation speed and accuracy, such methods struggle to instantaneously detect multiple people at a distance. In this study, we therefore present a method that uses visual recognition and prediction of transitions between human action states to assess collision risk when selecting the avoidance target. The proposed system calculates a risk assessment score based on recognition of human body direction, human walking patterns with an object, and face orientation, as well as prediction of transitions between human action states. First, we validated each recognition model and confirmed that the proposed system can recognize and predict human actions with high accuracy from 3 m ahead. Then, we compared the risk assessment score against video interviews asking people whom a mobile robot should pay attention to, and we found that the proposed system could capture, from vision, the features of human states that people attend to when avoiding collisions with others.
Published as "Preliminary Investigation of Collision Risk Assessment with Vision for Selecting Targets Paid Attention to by Mobile Robot" in the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).
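The described risk assessment score combines several per-cue recognition outputs into one number used to pick the avoidance target. A minimal sketch of such a combination follows; the weighted-sum form, the weight values, and all function names are illustrative assumptions, not the system reported in the paper:

```python
def collision_risk_score(body_direction: float, walking_pattern: float,
                         face_orientation: float, state_transition: float,
                         weights=(0.3, 0.2, 0.2, 0.3)) -> float:
    """Combine per-cue recognition confidences (each in [0, 1]) into one
    score. The cues follow the paper; the weighted sum is an assumed form."""
    cues = (body_direction, walking_pattern, face_orientation, state_transition)
    if not all(0.0 <= c <= 1.0 for c in cues):
        raise ValueError("cue confidences must lie in [0, 1]")
    return sum(w * c for w, c in zip(weights, cues))


def select_avoidance_target(pedestrians: dict) -> str:
    """Given {pedestrian_id: (cue confidences)}, return the id with the
    highest risk score, i.e. the pedestrian the robot should attend to."""
    return max(pedestrians, key=lambda pid: collision_risk_score(*pedestrians[pid]))
```

With weights summing to 1, the score stays in [0, 1], which makes it easy to threshold or rank across pedestrians.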
Pub Date: 2022-08-29. DOI: 10.1109/RO-MAN53752.2022.9900621
Isobel Voysey, L. Baillie, Joanne M. Williams, J. Herrmann
Zoomorphic robots are a promising tool for animal welfare education: they could be used to teach children that animals have minds and emotions, and thereby reduce the acceptance of cruelty towards animals. This study investigated the influence of animallike affective non-verbal behavior on children’s perceptions of the attributes and mental abilities of a zoomorphic robot, as well as their acceptance of cruelty towards it. Children who interacted with a robot that displayed animallike affective non-verbal behavior ascribed a significantly higher level of mental abilities to it. Higher levels of perceived mental abilities were not generally correlated with lower acceptance of cruelty, but higher levels of perceived social attributes were. A post-hoc analysis of the reasoning given for the unacceptability of cruelty found that the children who made moral judgments about the cruelty had rated the zoomorphic robot as significantly more animate.
Published as "Influence of Animallike Affective Non-verbal Behavior on Children’s Perceptions of a Zoomorphic Robot" in the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).