Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900730
M. Kirtay, Erhan Öztop, A. Kuhlen, M. Asada, V. Hafner
In this study, we extend our robot trust model to a multimodal setting in which the Nao robot leverages audio-visual data to perform a sequential multimodal pattern-recall task while interacting with a human partner who employs one of three guiding strategies: reliable, unreliable, or random. The humanoid robot is equipped with a multimodal auto-associative memory module that processes audio-visual patterns to extract cognitive load (i.e., computational cost) and an internal reward module that performs cost-guided reinforcement learning. Through interactive experiments, the robot comes to associate the low cognitive load (i.e., high cumulative reward) yielded during an interaction with high trustworthiness of the partner's guiding strategy. At the end of the experiment, the robot is given a free choice of instructor, and we show that it forms trust in the reliable partner. In a second setting of the same experiment, we endow the robot with an additional simple theory-of-mind module to assess the efficacy of the instructor in helping the robot perform the task. Our results show that the robot's performance improves when it factors the instructor assessment into its action decisions.
Title: Trustworthiness assessment in multimodal human-robot interaction based on cognitive load
Published in: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
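The cost-guided trust formation described above can be sketched as a toy simulation. This is a minimal illustration, not the authors' implementation: the strategy reliabilities, the load values, and the reward rule (reward = 1 − cognitive load) are all assumptions.

```python
import random

def simulate_trust(strategies, steps=500, seed=0):
    """Accumulate internal reward (low cognitive load -> high reward)
    for each guiding strategy; trust goes to the highest total."""
    rng = random.Random(seed)
    totals = {}
    for name, reliability in strategies.items():
        total = 0.0
        for _ in range(steps):
            # Reliable guidance -> correct pattern cue -> low recall cost.
            correct = rng.random() < reliability
            cognitive_load = 0.1 if correct else 0.9  # assumed cost values
            total += 1.0 - cognitive_load             # internal reward
        totals[name] = total
    return max(totals, key=totals.get), totals

# Hypothetical reliabilities for the three guiding strategies.
choice, totals = simulate_trust({"reliable": 0.95, "unreliable": 0.2, "random": 0.5})
```

Under these assumptions the free choice at the end falls on the reliable strategy, mirroring the paper's qualitative result.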
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900854
Alexander Tyshka, W. Louie
Robot-mediated therapy is an emerging field of research that seeks to improve therapy for children with Autism Spectrum Disorder (ASD). Current approaches to autonomous robot-mediated therapy often focus on having a robot teach a single skill to children with ASD and lack a personalized approach to each individual. More recently, Learning from Demonstration (LfD) approaches have been explored for teaching socially assistive robots to deliver personalized interventions after deployment, but these approaches require large numbers of demonstrations and rely on learning models that cannot be easily interpreted. In this work, we present an LfD system capable of learning the delivery of autism therapies in a data-efficient manner using inherently interpretable learning models. The LfD system learns a behavioral model of the task with minimal supervision via hierarchical clustering and then learns an interpretable policy to determine when to execute the learned behaviors. The system can learn from less than an hour of demonstrations, and for each of its predictions it can identify the demonstrated instances that contributed to its decision. The system performs well under unsupervised conditions and achieves even better performance with a low-effort human correction process enabled by the interpretable model.
Title: Transparent Learning from Demonstration for Robot-Mediated Therapy
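The clustering-then-policy pipeline can be illustrated with a small sketch. This is a hypothetical stand-in, with toy single-linkage clustering in place of the paper's hierarchical clustering and a 1-nearest-neighbour rule as the interpretable policy; the data points and behaviour names are invented.

```python
import math

def single_linkage(points, threshold):
    """Group demonstrated action frames whose nearest inter-cluster
    distance is below `threshold` (toy stand-in for hierarchical
    clustering of demonstrations)."""
    clusters = [[p] for p in points]

    def dist(a, b):
        return min(math.dist(p, q) for p in a for q in b)

    merged = True
    while merged and len(clusters) > 1:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if dist(clusters[i], clusters[j]) < threshold:
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

def policy(state, demos):
    """1-nearest-neighbour policy: returns the chosen behaviour *and*
    the demonstration that contributed to the decision, which is what
    makes the prediction traceable."""
    idx = min(range(len(demos)), key=lambda i: math.dist(state, demos[i][0]))
    return demos[idx][1], idx
```

For example, `policy((0.2, 0), [((0, 0), "greet"), ((5, 5), "prompt")])` selects `"greet"` and reports demonstration index 0 as its evidence.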
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900776
Rupal Srivastava, Maulshree Singh, Guilherme Daniel Gomes, Niall Murray, D. Devine
Conventional smart gloves present a portability challenge, as most rely on gesture-recognition techniques based on vision sensing and image processing; the multiple algorithms and signal-filtering stages further make the overall process cumbersome. This work proposes a Shape Memory Alloy (SMA) integrated sensing mechanism in a smart glove for autonomous control. A novel hand-gesture recognition technology is developed using kinaesthetic feedback from finger-joint movements. The paper presents a smart glove with an external SMA-embedded tubing attachment for the thumb, index, and middle fingers. Each SMA wire is constrained between a fixed end at the fingertip and a linear position sensor with spring feedback at the other end. The SMA wires in this design are in their austenite phase at room temperature and thus exhibit superelastic (pseudoelastic) behavior. Bending a finger produces tension in the SMA wire, which corresponds to mechanical travel in the linear position sensor. Individual and combined position-sensor readings are then used as commands for actuating interactive toys: with the three-finger approach, seven commands can be extracted from single or multiple finger movements. This data is further used to actuate the toys, and a use case for cobotic application is proposed to help better understand interactive play and hand-eye coordination, and thus early cognitive development, in children with Autism Spectrum Disorder (ASD). The discrete, binary data output is independent of other devices and heavy data-processing requirements, making the proposed SM-EXO a strong alternative to non-portable and complex smart gloves.
Title: SM-EXO: Shape Memory alloy-based Hand EXOskeleton for Cobotic Application
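The seven-command scheme follows directly from three binary finger channels: 2³ = 8 combinations, minus the all-relaxed state, leaves seven. A minimal decoding sketch (the command names are placeholders, not from the paper):

```python
def glove_command(thumb, index, middle):
    """Decode three binary SMA position-sensor states into a command.
    The all-relaxed state (0, 0, 0) is treated as idle; the remaining
    seven combinations each map to one actuation command."""
    code = (thumb << 2) | (index << 1) | middle
    if code == 0:
        return "idle"
    names = ["cmd1", "cmd2", "cmd3", "cmd4", "cmd5", "cmd6", "cmd7"]
    return names[code - 1]
```

Because the output is already discrete and binary per finger, no further signal processing is needed downstream, which is the portability argument the abstract makes.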
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900565
E. Song, Yeo-Il Yun, S. Lee, J. Koo
As collaboration between humans and robots becomes critical, stiffness control of the robot is essential for the stability and efficiency of work; research on variable stiffness mechanisms is therefore being actively conducted for service robots, soft robots, and exoskeletons. The main types of variable stiffness mechanisms are the jamming effect (particle jamming and layer jamming), shape memory polymer (SMP), and low melting point alloy (LMPA). Jamming-based mechanisms use negative pneumatic pressure to change stiffness, so the stiffness can be changed quickly and the mechanisms are easy to manufacture. SMP and LMPA, in contrast, use thermal energy to increase the material's stiffness, which risks harming humans or robots and takes considerable time to change the stiffness. This study therefore introduces a variable stiffness mechanism that combines particle jamming with a core-frame. Optimization studies aim to make the jamming effect usable in industry, but because of the randomness of particle jamming, existing studies modeled the variable stiffness mechanism as a simple beam or via Hooke's law, so their accuracy was low. In this study, five design variables are selected for the particle and core-frame elements that constitute the variable stiffness mechanism, and these variables are optimized through FEM simulations. The simulations are validated against a theoretical model of the variable stiffness structure under the jamming effect, and the optimization of the five design variables is confirmed through experiments.
Title: Structural optimization of variable stiffness mechanism with particle jamming and core-frame
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900583
Claudio Coppola, Gokhan Solak, L. Jamone
Using robot manipulators in contexts where it is undesirable or impractical for humans to physically intervene is crucial for several applications, from manufacturing to extreme environments. However, robots require a high degree of intelligence to operate in those environments, especially if they are not fully structured. Teleoperation compensates for this limitation by connecting the human operator to the robot using human-robot interfaces. The remotely operated sessions can also be used as demonstrations to program more powerful autonomous agents. In this article, we report a thorough user study to characterise the effect of simple vibrotactile feedback on the performance and cognitive load of the human user in performing teleoperated grasping and manipulation tasks. The experiments are performed using a portable and affordable bilateral teleoperation system that we designed, composed of a Leap Motion sensor and a custom-designed vibrotactile haptic glove to operate a 4-fingered robot hand equipped with 3-axis force sensors on the fingertips; the software packages we developed are open-source and publicly available. Our results show that vibrotactile feedback improves teleoperation and reduces cognitive load, especially for complex in-hand manipulation tasks.
Title: An affordable system for the teleoperation of dexterous robotic hands using Leap Motion hand tracking and vibrotactile feedback
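One common way to render fingertip force as vibrotactile feedback is a clamped linear mapping from measured force to motor intensity. The sketch below is illustrative only; the thresholds are assumptions, not values from the paper.

```python
def vibration_intensity(force_n, f_min=0.2, f_max=4.0):
    """Map a fingertip force reading (newtons) to a vibration motor
    intensity in [0, 1]. Forces below f_min produce no vibration;
    forces above f_max saturate. Thresholds are assumed values."""
    if force_n <= f_min:
        return 0.0
    return min(1.0, (force_n - f_min) / (f_max - f_min))
```

The dead zone below `f_min` keeps sensor noise from producing spurious vibration during free motion, while saturation at `f_max` protects against uncomfortable feedback during firm grasps.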
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900671
Grimaldo Silva, K. Rekik, A. Kanso, L. Schnitman
As the world surpasses a billion cameras [1] and their coverage of public and private spaces increases, the possibility of using their feeds not just to observe but to command robots becomes an ever more interesting prospect. Our work deals with multi-perspective interaction, in which a robot autonomously maps image pixels from reachable cameras to positions in its global coordinate space. This enables an operator to send the robot to specific positions in a camera view with no manual calibration. Furthermore, robot information, such as planned paths, can be used to augment the affected camera images with an overlaid projection of that visual information. The robustness of this approach has been validated in both simulated and real-world experiments.
Title: Multi-perspective human robot interaction through an augmented video interface supported by deep learning
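For a planar ground, the pixel-to-robot-frame mapping the abstract describes can be represented by a 3x3 homography. The paper learns the mapping with deep learning rather than manual calibration, but applying such a mapping looks like this sketch:

```python
def apply_homography(H, pixel):
    """Project an image pixel (u, v) through a 3x3 homography H into
    ground-plane coordinates, dividing by the homogeneous scale w."""
    u, v = pixel
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)
```

The same transform run in reverse (with the inverse homography) is what allows robot-side data such as planned paths to be drawn back onto the camera image as an overlay.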
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900750
Inwoo Kim, H. Jung, Jongkyu Kim, Sihwan Kim, Jong-Myung Park, Soo-Hong Lee
The ankle plays a major role as an end effector in gait and upright standing. As the number of people with reduced ankle-joint mobility due to aging and nerve damage increases, rehabilitation and related research are steadily growing. However, most studies overlook the eversion motion, which plays an important role in stability. In this study, an intention-detection algorithm that includes the eversion motion was developed, together with a multi-channel EMG sensor module. While the ankle was moved in specific directions, 36 channels of EMG signals were measured to determine the correlation between ankle motion and EMG signals. A CNN trained with the Adam optimizer was used to build the algorithm, and ankle motion was estimated with high accuracy.
Title: Ankle Intention Detection Algorithm with HD-EMG Sensor
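A typical preprocessing step in multi-channel EMG pipelines like this one is computing per-channel RMS features over sliding windows before classification. The sketch shows only that feature step under this general assumption; the paper's CNN classifier is not reproduced here.

```python
import math

def rms_features(window):
    """Per-channel root-mean-square over one EMG window. `window` is a
    list of channels (e.g. 36 for the HD-EMG module described above),
    each a list of raw samples."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]
```

Each window then yields one fixed-length feature vector (one RMS value per channel), which is the usual input shape for a downstream intention classifier.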
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900788
Laura Kunold
A pre-registered conceptual video-based replication of a laboratory experiment was conducted to test whether the impact of a robot's non-functional touch on a human can be studied from observation (online). To this end, n = 92 participants watched a video recording of the same human-robot interaction either with or without touch. Interpretations, evaluations, and emotional as well as behavioral responses were collected by means of an online survey. The results show that observing touch affects observers' emotional state: contrary to the hypothesis, observers felt significantly better when no touch was visible, and they evaluated the robot's touch as inappropriate. The findings are compared to results from a laboratory experiment to raise awareness of the different perspectives involved in observing versus experiencing touch.
Title: Seeing is not Feeling the Touch from a Robot
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900830
Glenda Hannibal, Anna Dobrosovestnova, A. Weiss
Focusing on the human experience of vulnerability in everyday interaction scenarios is still a novel approach. So far, only a proof-of-concept online study has been conducted; to extend this work, we present a follow-up online study. We consider in more detail how the experience of vulnerability caused by a trust violation through a privacy breach affects trust ratings in an interaction scenario with the Pepper robot assisting with clothes shopping. We report results from 32 survey responses and 11 semi-structured interviews. Our findings reveal that the privacy paradox, the common observation that people's stated privacy concerns diverge from their actual safeguarding behavior, also holds when studying trust in HRI. Moreover, participants considered only the added value of utility and entertainment, not the privacy breach, when deciding whether to interact with the robot again. We conclude that people might tolerate an untrustworthy robot even when feeling vulnerable in the everyday situation of clothes shopping.
Title: Tolerating Untrustworthy Robots: Studying Human Vulnerability Experience within a Privacy Scenario for Trust in Robots
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900821
F. Camara, Charles W. Fox
Human-robot interaction (HRI) requires quantitative models of proxemics and trust for robots to use in negotiating with people for space. Hall’s theory of proxemics has been used for decades to describe social interaction distances but has lacked detailed quantitative models and generative explanations to apply to these cases. In the limited case of autonomous vehicle interactions with pedestrians crossing a road, a recent model has explained the quantitative sizes of Hall’s distances to 4% error and their links to the concept of trust in human interactions. The present study extends this model by generalising several of its assumptions to cover further cases including human-human and human-robot interactions. It tightens the explanations of Hall zones from 4% to 1% error and fits several more recent empirical HRI results. This may help to further unify these disparate fields and quantify them to a level which enables real-world operational HRI applications.
{"title":"Extending Quantitative Proxemics and Trust to HRI","authors":"F. Camara, Charles W. Fox","doi":"10.1109/RO-MAN53752.2022.9900821","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900821","url":null,"abstract":"Human-robot interaction (HRI) requires quantitative models of proxemics and trust for robots to use in negotiating with people for space. Hall’s theory of proxemics has been used for decades to describe social interaction distances but has lacked detailed quantitative models and generative explanations to apply to these cases. In the limited case of autonomous vehicle interactions with pedestrians crossing a road, a recent model has explained the quantitative sizes of Hall’s distances to 4% error and their links to the concept of trust in human interactions. The present study extends this model by generalising several of its assumptions to cover further cases including human-human and human-robot interactions. It tightens the explanations of Hall zones from 4% to 1% error and fits several more recent empirical HRI results. This may help to further unify these disparate fields and quantify them to a level which enables real-world operational HRI applications.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131334064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
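Hall's zones, whose sizes the model above predicts quantitatively, are conventionally given fixed boundaries. A classification sketch using the standard textbook values (approximately 0.45 m, 1.2 m, 3.6 m, 7.6 m), not the model's fitted ones:

```python
# Conventional boundaries of Hall's proxemic zones, in metres.
ZONES = [(0.45, "intimate"), (1.2, "personal"), (3.6, "social"), (7.6, "public")]

def hall_zone(distance_m):
    """Classify an interpersonal distance into one of Hall's proxemic
    zones; distances past the public boundary fall outside all zones."""
    for bound, name in ZONES:
        if distance_m < bound:
            return name
    return "beyond public"
```

A robot negotiating for space would treat these boundaries as soft constraints, which is where a quantitative, trust-linked model of the zone sizes becomes operationally useful.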