Perceptions of a robot’s mental states influence performance in a collaborative task for males and females differently
Giulia Siri, A. Abubshait, D. D. Tommaso, P. Cardellicchio, A. D’Ausilio, A. Wykowska
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900514
With the increasing use of social robots and automated machines in our daily lives, roboticists need to design robots that are suitable for human-robot collaboration. Prior work suggests that robots perceived to be intentional (i.e., able to experience mental life capacities) can, in most cases, positively affect human-robot collaboration. Given studies highlighting the importance of individual differences and how they drive our perception, we aimed to investigate how gender moderates the relationship between subjective perceptions of robots and behavioral performance in a human-robot collaborative task. Participants rated a humanoid robot (iCub) on whether it can experience mental life capacities and completed a collaborative task with it. We correlated their subjective ratings with the completion time of the collaborative task and found a positive correlation between perceiving iCub as able to experience basic and social emotions and performance (i.e., movement times). This relationship, however, was evident for males but not females. The results suggest that perceiving humanoid robots as capable of experiencing mental states influences collaborative performance differently depending on gender. These findings are relevant to the field of social robotics and to the successful design of robot interaction partners for workplaces.
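The moderation analysis described above, correlating subjective ratings with movement times separately per gender group, can be sketched as follows. This is an illustrative reconstruction only: the data values are invented, and the paper's actual statistical pipeline is not specified in the abstract.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: mental-life ratings and movement times, split by gender.
ratings = {"male": [2.0, 3.5, 4.0, 5.0], "female": [2.5, 3.0, 4.5, 5.0]}
times = {"male": [1.1, 1.4, 1.6, 1.9], "female": [1.5, 1.3, 1.6, 1.4]}

# Correlate ratings with movement times separately for each group,
# mirroring the per-group (moderation) analysis described in the abstract.
for group in ratings:
    r = pearson_r(ratings[group], times[group])
    print(f"{group}: r = {r:.2f}")
```

A moderation effect would appear as a strong correlation in one group and a weak one in the other, as the abstract reports for males versus females.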
Investigating gender-stereotyped interactions with virtual agents in public spaces
Anja Müller, Lydia Penkert, S. Schneider, A. Richert
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900529
Current research on the impact of gender appearance in virtual agents and social robots highlights the danger of transmitting and solidifying existing gender stereotypes. To investigate how gender-stereotyped interaction in public spaces depends on a virtual agent’s gender, we varied the gender of a virtual agent at a metro station. We used an ethnographic study approach, combining a two-day behavioral observation and semi-structured interviews with a descriptive and qualitative analysis of four weeks of system logs. Our results show that topics of conversation differ depending on the virtual agent’s gender: the male virtual agent was asked about topics such as brothels, drugs, and alcohol and was insulted frequently, while the female one was asked about relationship status or flirting.
Nonverbal Cues Expressing Robot Personality - A Movement Analysts Perspective
M. V. Otterdijk, Heqiu Song, K. Tsiakas, Ilka van Zeijl, E. Barakova
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900647
In social robotics, where people and robots interact in a social context, robot personality design is critical. Through voice, words, gestures, and nonverbal cues, social robots with expressive behaviors can display human-like actions, and the robot’s personality ensures consistency. This research aims to create robot personalities expressed only through nonverbal cues. Unlike existing studies that test expressive behaviors with non-specialized participants, we look at how and why human movement analysts perceive distinct personalities in robots (introvert vs. extrovert) based on the robot’s movement and other dynamic features, such as joint position, head and torso position, voice pitch, and speed. We report the findings of a thematic analysis of data obtained during a focus group with movement analysis experts who watched Pepper robot behaviors designed to be extroverted and introverted. Our findings lead to new guidelines for designing robot movement features, including body symmetry, personality-trait consistency, and social-cue congruence during an interaction, all emphasized by the movement analysts. Finally, we summarize design principles for extroverted and introverted robot behaviors based on the combined findings of the focus-group analysis and a literature review.
A Non-Humanoid Robotic Object for Providing a Sense Of Security
Adi Manor, Benny Megidish, Etay Todress, M. Mikulincer, H. Erel
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900747
Having a sense of security is considered a basic human emotional need. It increases confidence, encourages exploration, and enhances relationships with others. In this study, we tested the possibility of leveraging interaction with a simple non-humanoid robot to increase participants’ sense of security. The robotic behavior was designed with a psychology expert in attachment theory and translated into the robot’s morphology by an animator. Specifically, the robot was designed to be attentive and responsive, using lean, gaze, and nodding gestures. We compared participants’ experience in the secure condition to that of participants who interacted with a non-responsive robot. We further compared participants’ implicit sense of security between the robotic conditions and an additional baseline condition in which participants did not interact with the robot. Our findings indicate the potential of leveraging a simple non-humanoid robot to enhance humans’ sense of security.
Exploring First Impressions of the Perceived Social Intelligence and Construal Level of Robots that Disclose their Ability to Deceive
Kantwon Rogers, A. Howard
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900857
If a robot tells you it can lie for your benefit, how would that change how you perceive it? This paper presents a mixed-methods empirical study that investigates how disclosure of deceptive or honest capabilities influences the perceived social intelligence and construal level of a robot. We first conducted a study with 198 Mechanical Turk participants, then replicated it with 15 undergraduate students to gain qualitative data. Our results show that how a robot introduces itself can have noticeable effects on how it is perceived, even from just one exposure. In particular, when a robot reveals its ability to lie when it believes lying is in a human’s best interest, people find it noticeably less trustworthy than a robot that conceals any mention of honesty or declares total truthfulness. Moreover, robots that are forthcoming about their truthful abilities are seen at a lower construal level than ones that are transparent about their deceptive abilities. These results add much-needed knowledge to the understudied area of robot deception and could inform designers and policymakers of future practices when considering deploying robots that deceive.
Eye Design of Social Robots Inspired by the Difference of Gaze Clarity in Canid Species
Yuri Ouchi, F. Tanaka
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900536
In canid species, how clearly the positions of the eyes and pupils can be seen varies with the species’ sociality. Existing social robots, on the other hand, have a wide range of applications, including education, medical care, and customer service, yet most have a simple eye design with a white base and a black iris. However, as in canid species, it is conceivable that the appropriate communication intensity varies depending on the purpose of a robot, and that there is a suitable eye design for each use. In this study, we propose an appropriate gaze clarity for a robot in three situations: when the avatar speaks to the human, when the avatar listens to the human, and when the avatar is near the human during a task requiring concentration.
ClassMate Robot: A Robot to Support Teaching and Learning Activities in Schools
Ilenia Cucciniello, Gianluca L'Arco, Alessandra Rossi, C. Autorino, Giuseppe Santoro, Silvia Rossi
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900576
Educational robotics investigates the use of robots in schools to support teaching and learning activities. While several robotic solutions exist in support of STEM teaching activities, in this work we present "Classmate Robot", a new social robot to be used in classrooms to support the learning experience through interaction. Classmate Robot has been designed and developed to improve the effectiveness of these activities by providing a framework in which the robot’s behaviors can be personalized and learning applications can be easily integrated on top of the robot’s interaction capabilities. This approach aims to increase learner engagement. We introduce the ROS-based architecture, which is divided into three main layers plus an application layer. As a social robot, it combines several multimodal social cues to interact and communicate with students and teachers. Moreover, the robot is endowed with a set of behaviors designed to be consistent with its role of "classmate" in interactions with the students.
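A layered design of the kind the abstract describes, with learning applications plugged in on top of the robot's interaction capabilities, could look like the sketch below. The layer names and classes are assumptions for illustration, not taken from the paper, and real ROS nodes and topics are omitted.

```python
# Hypothetical sketch of a layered robot architecture with an application
# layer on top. Layer names are illustrative assumptions, not the paper's.

class PerceptionLayer:
    def sense(self, raw):
        # In a real system, speech and face detection would feed in here.
        return {"utterance": raw}

class BehaviorLayer:
    def select(self, percept, profile):
        # Personalized behavior selection: interaction style can be
        # tuned per student via a stored profile.
        style = profile.get("style", "neutral")
        return f"[{style}] respond to: {percept['utterance']}"

class ActuationLayer:
    def act(self, behavior):
        return f"executing {behavior!r}"

class LearningApp:
    """Application layer: a learning activity built on top of the
    robot's interaction capabilities."""
    def __init__(self):
        self.perception = PerceptionLayer()
        self.behavior = BehaviorLayer()
        self.actuation = ActuationLayer()

    def step(self, raw_input, profile):
        percept = self.perception.sense(raw_input)
        behavior = self.behavior.select(percept, profile)
        return self.actuation.act(behavior)

app = LearningApp()
print(app.step("What is 3 x 4?", {"style": "encouraging"}))
```

The point of the layering is that a new learning application only touches the top class; perception, behavior selection, and actuation stay reusable across activities.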
Observer-Aware Legibility for Social Navigation
Ada V Taylor, Elizabeth Mamantov, H. Admoni
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900676
We designed an observer-aware method for creating navigation paths that indicate a robot’s goal while attempting to remain in view of a particular observer. Prior art in legible motion does not account for observers’ limited fields of view, which can lead to wasted communication efforts that go unobserved by the intended audience. Our observer-aware legibility algorithm directly models the locations and perspectives of observers and places legible movements where they can be easily seen. To explore the effectiveness of this technique, we performed a 300-person online user study. Users viewed first-person videos of restaurant scenes with robot waiters moving along paths optimized for different observer perspectives, along with a baseline path that did not take any observer’s field of view into account. Participants reported their estimate of how likely it was that the robot was heading to their table versus the other goal table as it moved along each path. We found that for observers with incomplete views of the restaurant, observer-aware legibility is effective at increasing the period of time during which observers correctly infer the robot’s goal. Non-targeted observers perform worse on paths created for other observers than on their own, which is the natural drawback of personalizing legible motion to a particular observer. We also find that an observer’s relationship to the environment (e.g., what is in their field of view) has more influence on their inferences than the observer’s position relative to the targeted observer, and discuss how this implies that knowledge of the environment is required to plan effectively for multiple observers at once.
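The core geometric primitive such an observer model needs, deciding whether a waypoint falls inside an observer's field of view, can be sketched as follows. This is a simplified stand-in, not the paper's algorithm: the actual method optimizes whole paths, while this only tests visibility of one point.

```python
import math

def in_field_of_view(observer_pos, observer_heading, fov_deg, point):
    """Return True if `point` lies within the observer's angular field of view.

    Positions are 2-D (x, y) tuples; heading is in radians.
    """
    dx = point[0] - observer_pos[0]
    dy = point[1] - observer_pos[1]
    angle_to_point = math.atan2(dy, dx)
    # Smallest signed angular difference between heading and point direction.
    diff = (angle_to_point - observer_heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= math.radians(fov_deg) / 2

# An observer at the origin facing +x with a 120-degree field of view:
print(in_field_of_view((0, 0), 0.0, 120, (2, 1)))   # point ahead-left -> True
print(in_field_of_view((0, 0), 0.0, 120, (-2, 0)))  # point behind -> False
```

A path planner could then score candidate paths by how many of their legible segments pass this visibility test for the targeted observer.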
Towards an automatic generation of natural gestures for a storyteller robot
Unai Zabala, I. Rodriguez, E. Lazkano
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900532
Natural gesturing is very important for the credibility of social robots. It is even more crucial for storytelling robots, where expression, emotion, and emphasis must be highlighted. In this paper, we propose a hybrid gesture generation approach for a storytelling robot that combines beat gestures automatically generated by a GAN with a probabilistic system for inserting semantically related gestures. Beats are executed according to a probability based on sentence duration, and semantic gesture insertions depend on the previous occurrences of the gestures associated with the words. The polarity of the text is extracted and affects several features of the motion to arouse emotion. A qualitative evaluation of the robot’s behavior confirms the approach as a promising storytelling system.
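The selection logic described in the abstract, duration-driven beat probability plus occurrence-dependent semantic insertion, can be sketched roughly as below. The gesture names, thresholds, and damping rule are illustrative assumptions; the paper's GAN-generated beats are replaced here by a placeholder label.

```python
import random

# Hypothetical trigger-word -> gesture mapping (illustrative only).
SEMANTIC_GESTURES = {"big": "arms_wide", "no": "head_shake"}

def plan_gestures(sentence, duration_s, seen_counts, rng=random):
    gestures = []
    # Longer sentences get a higher chance of a beat gesture
    # (the paper generates the beats themselves with a GAN).
    beat_prob = min(1.0, duration_s / 5.0)
    if rng.random() < beat_prob:
        gestures.append("beat")
    for word in sentence.lower().split():
        if word in SEMANTIC_GESTURES:
            seen_counts[word] = seen_counts.get(word, 0) + 1
            # Damp repetition: stop inserting after the first few occurrences.
            if seen_counts[word] <= 2:
                gestures.append(SEMANTIC_GESTURES[word])
    return gestures

seen = {}
print(plan_gestures("That is a big big dog", 6.0, seen))
# -> ['beat', 'arms_wide', 'arms_wide']  (beat_prob saturates at 1.0 here)
```

Carrying `seen_counts` across sentences is what makes semantic insertions depend on previous occurrences, so the robot does not repeat the same gesture for every mention of a word.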
Don’t Get into Trouble! Risk-aware Decision-Making for Autonomous Vehicles
Kasra Mokhtari, Alan R. Wagner
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900795
Risk is traditionally described as the expected likelihood of an undesirable outcome, such as a collision for an autonomous vehicle. Accurately predicting risk, or potentially risky situations, is critical for the safe operation of an autonomous vehicle. This work combines a controller trained to navigate around individuals in a crowd with a risk-based decision-making framework for an autonomous vehicle that integrates high-level risk-based path planning with reinforcement learning-based low-level control. We evaluated our method in a high-fidelity simulation environment. Our method resulted in zero collisions with pedestrians and predicted the least risky path, travel time, or travel day in approximately 72% of traversals. This work can improve safety by allowing an autonomous vehicle to one day avoid and react to risky situations.
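The high-level notion of risk used above, an expectation over undesirable outcomes, reduces to a simple computation when choosing among candidate paths. The path names and numbers below are invented for illustration; the paper's framework additionally couples this with learned low-level control.

```python
# Score each candidate path by its expected risk and pick the minimum.

def expected_risk(events):
    """events: list of (probability, severity) pairs along a path."""
    return sum(p * s for p, s in events)

candidate_paths = {
    "main_street": [(0.10, 5.0), (0.02, 50.0)],  # crowded but short
    "back_alley":  [(0.01, 50.0)],               # rare but severe hazard
    "park_route":  [(0.20, 1.0)],                # frequent minor slowdowns
}

best = min(candidate_paths, key=lambda p: expected_risk(candidate_paths[p]))
print(best)  # -> park_route (expected risks: 1.5, 0.5, 0.2)
```

The same argmin structure applies to the other decisions the abstract mentions, such as the least risky travel time or travel day, with the candidate set swapped accordingly.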