Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515548
Learning Task Constraints in Visual-Action Planning from Demonstrations
Francesco Esposito, Christian Pek, Michael C. Welle, D. Kragic
Visual planning approaches have shown great success for decision-making tasks with no explicit model of the state space. Learning a suitable representation and constructing a latent space where planning can be performed allows non-experts to set up and plan motions by just providing images. However, learned latent spaces are usually not semantically interpretable, and thus it is difficult to integrate task constraints. We propose a novel framework to determine whether plans satisfy constraints given demonstrations of policies that satisfy or violate the constraints. The demonstrations are realizations of Linear Temporal Logic formulas which are employed to train Long Short-Term Memory (LSTM) networks directly in the latent space representation. We demonstrate that our architecture enables designers to easily specify, compose and integrate task constraints, and achieves high accuracy. Furthermore, this visual planning framework enables human interaction, coping with environment changes that a human worker may introduce. We show the flexibility of the method on a box-pushing task in a simulated warehouse setting with different task constraints.
{"title":"Learning Task Constraints in Visual-Action Planning from Demonstrations","authors":"Francesco Esposito, Christian Pek, Michael C. Welle, D. Kragic","doi":"10.1109/RO-MAN50785.2021.9515548","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515548","url":null,"abstract":"Visual planning approaches have shown great success for decision making tasks with no explicit model of the state space. Learning a suitable representation and constructing a latent space where planning can be performed allows non-experts to setup and plan motions by just providing images. However, learned latent spaces are usually not semantically-interpretable, and thus it is difficult to integrate task constraints. We propose a novel framework to determine whether plans satisfy constraints given demonstrations of policies that satisfy or violate the constraints. The demonstrations are realizations of Linear Temporal Logic formulas which are employed to train Long Short-Term Memory (LSTM) networks directly in the latent space representation. We demonstrate that our architecture enables designers to easily specify, compose and integrate task constraints and achieves high performance in terms of accuracy. Furthermore, this visual planning framework enables human interaction, coping the environment changes that a human worker may involve. We show the flexibility of the method on a box pushing task in a simulated warehouse setting with different task constraints.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"32 1","pages":"131-138"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86601238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515501
Human-Robot Trust Assessment Using Top-Down Visual Tracking After Robot Task Execution Mistakes
Kasper Hald, M. Rehm, T. Moeslund
With increased interest in close-proximity human-robot collaboration in production settings, it is important to understand how robot behaviors and mistakes affect human-robot trust, as a lack of trust can cause a loss in productivity and over-trust can lead to hazardous misuse. We designed a system for real-time human-robot trust assessment using a top-down depth-camera tracking setup, with the goal of using signs of physical apprehension to infer decreases in trust toward the robot. In an experiment with 20 participants, we evaluated the tracking system in a repetitive collaborative pick-and-place task in which the participant and the robot had to move a set of cones across a table. Midway through the tasks we disrupted the participants’ expectations by having the robot perform a trust-dampening action. Throughout the tasks we measured the participants’ preferred proximity and their trust toward the robot. Comparing irregular robot movements versus task-execution mistakes, as well as simultaneous versus turn-taking collaboration, we found that reported trust decreased significantly when the robot performed an execution mistake that ran counter to the shared objective. This decrease was larger for participants working simultaneously with the robot. The effect of the trust-dampening actions on preferred proximity was inconclusive due to unexplained movement trends between tasks throughout the experiment. Despite being given the option to stop the robot in case of abnormal behavior, participants did not interrupt the robot more often after the trust-dampening actions we tested.
{"title":"Human-Robot Trust Assessment Using Top-Down Visual Tracking After Robot Task Execution Mistakes","authors":"Kasper Hald, M. Rehm, T. Moeslund","doi":"10.1109/RO-MAN50785.2021.9515501","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515501","url":null,"abstract":"With increased interest in close-proximity human-robot collaboration in production settings it is important that we understand how robot behaviors and mistakes affect human-robot trust, as a lack of trust can cause loss in productivity and over-trust can lead to hazardous misuse. We designed a system for real-time human-robot trust assessment using a top-down depth camera tracking setup with the goal of using signs of physical apprehension to infer decreases in trust toward the robot. In an experiment with 20 participants we evaluated the tracking system in a repetitive collaborative pick-and-place task where the participant and the robot had to move a set of cones across a table. Midway through the tasks we disrupted the participants expectations by having the robot perform a trust-dampening action. Throughout the tasks we measured the participant’s preferred proximity and their trust toward the robot. Comparing irregular robot movements versus task execution mistakes as well simultaneous versus turn-taking collaboration, we found reported trust was significantly decreased when the robot performed an execution mistake going counter to the shared objective. This decrease was higher for participant working simultaneously as the robot. The effect of the trust-dampening actions on preferred proximity was inconclusive due to unexplained movement trends between tasks throughout the experiment. Despite being given the option to stop the robot in case of abnormal behavior, the trust-dampening actions did not increase the number of participant disruptions for the actions we tested.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"22 1","pages":"892-898"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86216733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515560
The CAR Approach: Creative Applied Research Experiences for Master’s Students in Autonomous Platooning
G. Sidorenko, Wojciech Mostowski, A. Vinel, J. Sjöberg, M. Cooney
Autonomous vehicles (AVs) are crucial robotic systems that promise to improve our lives via safe, efficient, and inclusive transport, while posing new challenges for the education of future researchers in the area that our current research and education might not be ready to deal with: in particular, we do not know what the AVs of the future will look like, practical learning is restricted due to cost and safety concerns, and a high degree of multidisciplinary knowledge is required. Here, following the broad outline of Active Student Participation theory, we propose a pedagogical approach targeted toward AVs, called CAR, that combines Creativity theory, Applied demo-oriented learning, and Real-world research context. Furthermore, we report on applying the approach to stimulate learning and engagement in a master’s course, in which students freely created a demo with 10 small robots running ROS2 and Ubuntu on Raspberry Pis, in connection with an ongoing research project and a real current problem (SafeSmart and COVID-19). The results suggested the feasibility of the CAR approach for enabling learning, as well as mutual benefits for both the students and researchers involved, and indicated some possibilities for future improvement toward more effective integration of research experiences into second-cycle courses.
{"title":"The CAR Approach: Creative Applied Research Experiences for Master’s Students in Autonomous Platooning","authors":"G. Sidorenko, Wojciech Mostowski, A. Vinel, J. Sjöberg, M. Cooney","doi":"10.1109/RO-MAN50785.2021.9515560","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515560","url":null,"abstract":"Autonomous vehicles (AVs) are crucial robotic systems that promise to improve our lives via safe, efficient, and inclusive transport–while posing some new challenges for the education of future researchers in the area, that our current research and education might not be ready to deal with: In particular, we don’t know what the AVs of the future will look like, practical learning is restricted due to cost and safety concerns, and a high degree of multidisciplinary knowledge is required. Here, following the broad outline of Active Student Participation theory, we propose a pedagogical approach targeted toward AVs called CAR that combines Creativity theory, Applied demo-oriented learning, and Real world research context. Furthermore, we report on applying the approach to stimulate learning and engagement in a master’s course, in which students freely created a demo with 10 small robots running ROS2 and Ubuntu on Raspberry Pis, in connection to an ongoing research project and a real current problem (SafeSmart and COVID-19). The results suggested the feasibility of the CAR approach for enabling learning, as well as mutual benefits for both the students and researchers involved, and indicated some possibilities for future improvement, toward more effective integration of research experiences into second cycle courses.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"150 1","pages":"214-221"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77405520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515317
The Influence of Robot's Unexpected Behavior on Individual Cognitive Performance
Youdi Li, E. Sato-Shimokawara, Toru Yamaguchi
Social robots have become pervasive in learning environments. An empirical understanding of how different individuals perceive and react to a robot’s expressions has become an urgent necessity for their sustainable deployment. In this study, we examined whether a robot’s unexpected actions affect individual cognitive performance. We present an experiment in which a robot could produce unexpected visual or auditory stimuli while the participant’s reaction time in the Simon task was recorded to investigate the robot’s influence. The results verify that individual differences exist both in the perception of a social robot’s expressions and in the extent of change in cognitive performance. This study provides insights for richer human-robot interaction applications by taking individual differences in perception and response type into account, and therefore constitutes a modest but significant step toward adaptive human-robot interaction.
{"title":"The Influence of Robot's Unexpected Behavior on Individual Cognitive Performance","authors":"Youdi Li, E. Sato-Shimokawara, Toru Yamaguchi","doi":"10.1109/RO-MAN50785.2021.9515317","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515317","url":null,"abstract":"Social robots have become pervasive in learning environments. The empirical understanding of how different individuals perceive and react to robot’s expressions has become an urgent necessity for the sustainable deployment. In this study, we examined whether robot’s unexpected actions affect individual cognitive performance. We have presented the experiment in which a robot could use unexpected visual or auditory stimuli and one’s reaction time in the Simon task was recorded for the investigation of the influence from the robot. Results have verified the idea that individual differences exist both in the perception of social robot’s expressions and the extent of change in the cognitive performance. This study provides insights into a richer application of human-robot interaction by taking individual differences regarding perception and response type into account, therefore constitutes a modest but significant step in the direction of adaptive human-robot interaction.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"14 1","pages":"1103-1109"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73313818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515546
Design and Evaluation of an Affective, Continuum Robotic Appendage for Child-Robot Interaction
Deanna Kocher, Juliette Bendheim, K. Green
We introduce a robotic appendage (a "fin") for a non-humanoid mobile robot that can communicate affect to child collaborators. Affective configurations were generated from a collection of cartoon images that featured characters with floppy or bunny ears. These images were classified according to the six Ekman emotions, analyzed to create ideal emotion configurations, and validated with a user study. From these configurations, we designed multiple continuum robot fin appendages and evaluated them based on (a) their ability to achieve the generated affect configurations, and (b) their durability for sustained use in child-robot interaction studies.
{"title":"Design and Evaluation of an Affective, Continuum Robotic Appendage for Child-Robot Interaction","authors":"Deanna Kocher, Juliette Bendheim, K. Green","doi":"10.1109/RO-MAN50785.2021.9515546","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515546","url":null,"abstract":"We introduce a robotic appendage (a \"fin\") for a non-humanoid mobile robot that can communicate affect to child collaborators. Affective configurations were generated from a collection of cartoon images that featured characters with floppy or bunny ears. These images were classified according to the six Ekman emotions, analyzed to create ideal emotion configurations, and validated with a user study. From these configurations, we designed multiple continuum robot fin appendages and evaluated them based on (a) their ability to achieve the generated affect configurations, and (b) their durability for sustained use in child-robot interaction studies.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"59 1","pages":"586-591"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75643576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515533
Robot Facial Expression Framework for Enhancing Empathy in Human-Robot Interaction
Ung Park, Minso Kim, Youngeun Jang, GiJae Lee, Kanggeon Kim, Igil Kim, Jong-suk Choi
A social robot interacts with humans on the basis of social intelligence, and related applications are being developed across diverse fields as robots become increasingly integrated into modern society. In this regard, social intelligence and interaction are the key concepts for a social robot. Social intelligence refers to the ability to manage interactions and the thoughts and feelings involved in relationships with other people; among its components, primal empathy, the ability to empathize by perceiving emotional signals, was applied to the robot in this study. We propose that the empathic ability of a social robot can be improved if the robot can create facial expressions based on the emotional state of a user. Moreover, we suggest a framework of facial expressions for robots. These facial expressions can be reused across various social robot platforms to realize such a strategy.
{"title":"Robot Facial Expression Framework for Enhancing Empathy in Human-Robot Interaction","authors":"Ung Park, Minso Kim, Youngeun Jang, GiJae Lee, Kanggeon Kim, Igil Kim, Jong-suk Choi","doi":"10.1109/RO-MAN50785.2021.9515533","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515533","url":null,"abstract":"A social robot interacts with humans based on social intelligence, for which related applications are being developed across diverse fields to be increasingly integrated in modern society. In this regard, social intelligence and interaction are the keywords of a social robot. Social intelligence refers to the ability to control interactions or thoughts and feelings of relationships with other people; primal empathy, which is the ability to empathize by perceiving emotional signals, among the components of social intelligence was applied to the robot in this study. We proposed that the empathic ability of a social robot can be improved if the social robot can create facial expressions based on the emotional state of a user. Moreover, we suggested a framework of facial expressions for robots. These facial expressions can be repeatedly used in various social robot platforms to achieve such a strategy.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"34 1","pages":"832-838"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73666430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515325
Detecting Compensatory Motions and Providing Informative Feedback During a Tangible Robot Assisted Game for Post-Stroke Rehabilitation
A. Ozgur, Hala Khodr, Barbara Bruno, Nicolas Gandar, M. Wessel, F. Hummel, P. Dillenbourg
Gamified rehabilitation tackles the problem of keeping patients engaged in, and motivated to do, physical rehabilitation in order to improve its efficacy. However, compared with standard rehabilitation, patients are freer to move about and may compensate for their motion difficulties with parasitic movements, which would greatly reduce the efficacy of the rehabilitation. To identify and characterize compensatory motions, we collected and analyzed video data of people playing the "tangible Pacman" game (an upper-limb rehabilitation game in which a patient moves a semi-passive robot, the "Pacman", on a map to collect 6 apples while being chased by one or two autonomous robots, the "ghosts"). Participants included 10 healthy elderly adults and 10 chronic stroke patients, who played multiple runs of the game with different map sizes and game configurations. By analyzing the video recordings we identified greater shoulder and torso lateral-tilt compensation in stroke patients and developed a proof-of-concept compensatory motion detection system that relies on a wearable Inertial Measurement Unit and ROS to provide in-game, real-time visual feedback on compensation.
{"title":"Detecting Compensatory Motions and Providing Informative Feedback During a Tangible Robot Assisted Game for Post-Stroke Rehabilitation","authors":"A. Ozgur, Hala Khodr, Barbara Bruno, Nicolas Gandar, M. Wessel, F. Hummel, P. Dillenbourg","doi":"10.1109/RO-MAN50785.2021.9515325","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515325","url":null,"abstract":"Gamified rehabilitation tackles the problem of keeping patients engaged in, and motivated to do physical rehabilitation to improve its efficacy. However, with respect to standard rehabilitation, patients are freer to move about and may compensate their motion difficulties with parasite movements, which would greatly reduce the efficacy of the rehabilitation. To identify and characterize compensatory motions, we collected and analyzed video data of people playing the \"tangible Pacman\" game (an upper-limb rehabilitation game in which a patient moves a semi-passive robot, the \"Pacman\", on a map to collect 6 apples, while being chased by one or two autonomous robots, the \"ghosts\"). Participants include 10 healthy elderly adults and 10 chronic stroke patients, who played multiple runs of the game, with different sized maps and various game configurations. By analyzing the video recordings we successfully identified higher shoulder and torso lateral tilt compensation in stroke patients and developed a proof-of-concept compensatory motion detection system which relies on a wearable Inertial Measurement Unit and ROS to provide in-game, real-time visual feedback on compensation.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"45 1","pages":"243-249"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82471947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515481
Human-Aware Robot Navigation Based on Learned Cost Values from User Studies
K. Bungert, Lilli Bruckschen, S. Krumpen, Witali Rau, Michael Weinmann, Maren Bennewitz
In this paper, we present a new approach to human-aware robot navigation, which extends our previous proximity-based navigation framework [1] by introducing visibility and predictability as new parameters. We derived these parameters from a user study and incorporated them into a cost function that models the user’s discomfort with a relative robot position based on proximity, visibility, predictability, and work efficiency. We use this cost function in combination with an A* planner to create a user-preferred robot navigation policy. In comparison to our previous framework, our new cost function results in a 6% increase in social-distance compliance, a 6.3% decrease in the visibility of the robot, as preferred by users, and an average decrease in orientation changes of 12.6° per meter, resulting in better predictability, while maintaining a comparable average path length. We further performed a virtual reality experiment to evaluate user comfort based on direct human feedback, finding that participants on average felt comfortable to very comfortable with the robot trajectories resulting from our approach.
{"title":"Human-Aware Robot Navigation Based on Learned Cost Values from User Studies","authors":"K. Bungert, Lilli Bruckschen, S. Krumpen, Witali Rau, Michael Weinmann, Maren Bennewitz","doi":"10.1109/RO-MAN50785.2021.9515481","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515481","url":null,"abstract":"In this paper, we present a new approach to human-aware robot navigation, which extends our previous proximity-based navigation framework [1] by introducing visibility and predictability as new parameters. We derived these parameters from a user study and incorporated them into a cost function, which models the user’s discomfort with respect to a relative robot position based on proximity, visibility, predictability, and work efficiency. We use this cost function in combination with an A* planner to create a user-preferred robot navigation policy. In comparison to our previous framework, our new cost function results in a 6% increase in social distance compliance, a 6.3% decrease in visibility of the robot as preferred, and an average decrease of orientation changes of 12.6° per meter resulting in better predictability, while maintaining a comparable average path length. We further performed a virtual reality experiment to evaluate the user comfort based on direct human feedback, finding that the participants on average felt comfortable to very comfortable with the resulting robot trajectories from our approach.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"29 1","pages":"337-342"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73466520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515498
Dropping Sensation for Development of Lower Limb Force Feedback Device
T. Masuda, T. Tanaka, Ryunosuke Sawahashi, M. Okui, Rie Nishihama, T. Nakamura
In this study, we evaluate the dropping sensation for the development of a wearable lower-limb force feedback device that can render both dropping and walking sensations. The developed device can render the dropping sensation with a smaller drop height than in reality by decelerating and stopping the descent while the drop image is being rendered. Considering that the user will be walking with the device, a smaller device height leads to better safety. The purpose of this study is to clarify the required height of the vertical range of motion of the platform part and the feasibility of the concept of rendering the dropping sensation. For this purpose, we evaluated the dropping sensation for differences in the acceleration time and the deceleration experienced by the user. The results showed that rendering the dropping sensation required more than 0.41 s of descent at an acceleration of approximately 1377 mm/s². Moreover, the dropping sensation and the sense of reality were not impaired even when the platform part under the foot was decelerated. This result indicates that the device can be made smaller.
{"title":"Dropping Sensation for Development of Lower Limb Force Feedback Device","authors":"T. Masuda, T. Tanaka, Ryunosuke Sawahashi, M. Okui, Rie Nishihama, T. Nakamura","doi":"10.1109/RO-MAN50785.2021.9515498","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515498","url":null,"abstract":"In this study, we evaluate the dropping sensation for the development of a wearable lower limb force feedback device that can render both dropping and walking sensations. The developed device can render the dropping sensation at a smaller height than in reality by decelerating and stopping descent during the rendering of the drop image. Considering the user will be walking with the device, a smaller device height leads to better safety. The purpose of this study is to clarify the required specifications of the height of the vertical range of motion of the platform part, and the feasibility of the concept of rendering the dropping sensation. For this purpose, the dropping sensation for the difference in human acceleration time and human deceleration acceleration was evaluated. The results showed that the rendering of the dropping sensation required more than 0.41 s of descent at an acceleration of approximately 1377 mm/s2. Moreover, the dropping sensation and sense of reality were not impaired, even when the platform part of the foot was decelerated. This result indicates that the device can be made smaller.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"9 1","pages":"398-405"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86668494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515513
"The robot may not notice my discomfort" – Examining the Experience of Vulnerability for Trust in Human-Robot Interaction
Glenda Hannibal, A. Weiss, V. Charisi
Ensuring trust in human-robot interaction (HRI) is considered essential for widespread use of robots in society and everyday life. While the majority of studies use game-based and high-risk scenarios with low familiarity to gain a deeper understanding of human trust in robots, scenarios with more subtle trust violations that could happen in everyday life situations are less often considered. In this paper, we present a theory-driven approach to studying the situated trust in HRI by focusing on the experience of vulnerability. Focusing on vulnerability not only challenges previous work on trust in HRI from a theoretical perspective, but is also useful for guiding empirical investigations. As a first proof-of-concept study, we conducted an interactive online survey that demonstrates that it is possible to measure human experience of vulnerability in the ordinary, mundane, and familiar situation of clothes shopping. We conclude that the inclusion of subtle trust violation scenarios occurring in the everyday life situation of clothes shopping enables a better understanding of situated trust in HRI, which is of special importance when considering more near-future applications of robots.
{"title":"\"The robot may not notice my discomfort\" – Examining the Experience of Vulnerability for Trust in Human-Robot Interaction","authors":"Glenda Hannibal, A. Weiss, V. Charisi","doi":"10.1109/RO-MAN50785.2021.9515513","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515513","url":null,"abstract":"Ensuring trust in human-robot interaction (HRI) is considered essential for widespread use of robots in society and everyday life. While the majority of studies use game-based and high-risk scenarios with low familiarity to gain a deeper understanding of human trust in robots, scenarios with more subtle trust violations that could happen in everyday life situations are less often considered. In this paper, we present a theory-driven approach to studying the situated trust in HRI by focusing on the experience of vulnerability. Focusing on vulnerability not only challenges previous work on trust in HRI from a theoretical perspective, but is also useful for guiding empirical investigations. As a first proof-of-concept study, we conducted an interactive online survey that demonstrates that it is possible to measure human experience of vulnerability in the ordinary, mundane, and familiar situation of clothes shopping. We conclude that the inclusion of subtle trust violation scenarios occurring in the everyday life situation of clothes shopping enables a better understanding of situated trust in HRI, which is of special importance when considering more near-future applications of robots.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"118 1","pages":"704-711"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87686925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}