Building Long-Term Human–Robot Relationships: Examining Disclosure, Perception and Well-Being Across Time
Pub Date: 2023-11-30 | DOI: 10.1007/s12369-023-01076-z
Guy Laban, Arvid Kappas, Val Morrison, Emily S. Cross
While interactions with social robots are novel and exciting for many people, one concern is the extent to which people’s behavioural and emotional engagement might be sustained across time, since during initial interactions with a robot, its novelty is especially salient. This challenge is particularly noteworthy when considering interactions designed to support people’s well-being, with limited evidence (or empirical exploration) of social robots’ capacity to support people’s emotional health over time. Accordingly, our aim here was to examine how long-term repeated interactions with a social robot affect people’s self-disclosure behaviour toward the robot, their perceptions of the robot, and how such sustained interactions influence factors related to well-being. We conducted a mediated long-term online experiment with participants conversing with the social robot Pepper 10 times over 5 weeks. We found that people self-disclose increasingly more to a social robot over time, and report the robot to be more social and competent over time. Participants’ moods also improved after talking to the robot, and across sessions, they found the robot’s responses increasingly comforting as well as reported feeling less lonely. Finally, our results emphasize that when the discussion frame was supposedly more emotional (in this case, framing questions in the context of the COVID-19 pandemic), participants reported feeling lonelier and more stressed. These results set the stage for situating social robots as conversational partners and provide crucial evidence for their potential inclusion in interventions supporting people’s emotional health through encouraging self-disclosure.
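Designs like this, with repeated sessions nested within participants, are commonly analysed with mixed-effects models that allow for within-participant correlation. The sketch below is only a generic illustration of how such a session-by-session trend could be tested in Python with statsmodels; the data file and the column names `participant`, `session`, and `disclosure` are assumptions, not the authors' materials or analysis.

```python
# Hypothetical illustration of testing a linear trend in self-disclosure
# across repeated sessions; file and column names are assumed, not from the paper.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per participant per session.
df = pd.read_csv("self_disclosure_long.csv")  # columns: participant, session, disclosure

# Random intercept per participant; fixed linear effect of session number.
model = smf.mixedlm("disclosure ~ session", data=df, groups=df["participant"])
result = model.fit()
print(result.summary())  # a positive 'session' coefficient indicates increasing disclosure
```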
{"title":"Building Long-Term Human–Robot Relationships: Examining Disclosure, Perception and Well-Being Across Time","authors":"Guy Laban, Arvid Kappas, Val Morrison, Emily S. Cross","doi":"10.1007/s12369-023-01076-z","DOIUrl":"https://doi.org/10.1007/s12369-023-01076-z","url":null,"abstract":"<p>While interactions with social robots are novel and exciting for many people, one concern is the extent to which people’s behavioural and emotional engagement might be sustained across time, since during initial interactions with a robot, its novelty is especially salient. This challenge is particularly noteworthy when considering interactions designed to support people’s well-being, with limited evidence (or empirical exploration) of social robots’ capacity to support people’s emotional health over time. Accordingly, our aim here was to examine how long-term repeated interactions with a social robot affect people’s self-disclosure behaviour toward the robot, their perceptions of the robot, and how such sustained interactions influence factors related to well-being. We conducted a mediated long-term online experiment with participants conversing with the social robot Pepper 10 times over 5 weeks. We found that people self-disclose increasingly more to a social robot over time, and report the robot to be more social and competent over time. Participants’ moods also improved after talking to the robot, and across sessions, they found the robot’s responses increasingly comforting as well as reported feeling less lonely. Finally, our results emphasize that when the discussion frame was supposedly more emotional (in this case, framing questions in the context of the COVID-19 pandemic), participants reported feeling lonelier and more stressed. These results set the stage for situating social robots as conversational partners and provide crucial evidence for their potential inclusion in interventions supporting people’s emotional health through encouraging self-disclosure.\u0000</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"875 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138529968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Expanding the Interaction Repertoire of a Social Drone: Physically Expressive Possibilities of a Perched BiRDe
Pub Date: 2023-11-29 | DOI: 10.1007/s12369-023-01079-w
Ori Fartook, Karon MacLean, Tal Oron-Gilad, Jessica R. Cauchard
The field of human–drone interaction (HDI) has investigated an increasing number of applications for social drones, while focusing on the drone’s inherent ability to fly and thus overlooking interaction opportunities such as a drone in its perched (i.e., non-flying) state. A drone cannot fly constantly, and more realistic HDI scenarios are needed. In this exploratory work, we therefore decoupled a social drone’s flying state from its perched state and investigated user interpretations of its physical rendering. To do so, we designed and developed BiRDe: a Bodily expressIons and Respiration Drone conveying Emotions. BiRDe was designed to render a range of emotional states by modulating its respiratory rate (RR) and changing its body posture using reconfigurable wings and head positions. Following its design, a validation study was conducted. In a laboratory study, participants (N = 30) observed and labeled twelve of BiRDe’s emotional behaviors using valence- and arousal-based emotional states. We identified consistent patterns in how BiRDe’s RR, wings, and head influenced perception in terms of valence, arousal, and willingness to interact. Furthermore, participants interpreted 11 out of the 12 behaviors in line with our initial design intentions. This work demonstrates a drone’s ability to communicate emotions even while perched and offers design implications and future applications.
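The abstract specifies the expressive channels (respiratory rate, wing and head posture) but not the mapping from valence and arousal to actuator values. The following is a minimal hypothetical sketch of one such mapping; all parameter ranges (breaths per minute, wing and head angles) are invented for illustration and are not BiRDe's actual design values.

```python
# Hypothetical valence-arousal -> actuator mapping for a perched expressive drone.
# Parameter ranges are invented for illustration; the paper's actual design values
# are not specified in the abstract.
from dataclasses import dataclass

@dataclass
class PerchedPose:
    breaths_per_min: float  # respiratory rate rendered by body expansion
    wing_angle_deg: float   # 0 = folded, 90 = fully raised
    head_pitch_deg: float   # negative = lowered, positive = raised

def render_emotion(valence: float, arousal: float) -> PerchedPose:
    """Map valence and arousal (both in [-1, 1]) to a perched body configuration."""
    # Higher arousal -> faster breathing and more raised wings.
    breaths = 8 + 14 * (arousal + 1) / 2   # 8-22 breaths/min
    wings = 90 * (arousal + 1) / 2         # 0-90 degrees
    # Higher valence -> head raised; lower valence -> head drooped.
    head = 30 * valence                    # -30 to +30 degrees
    return PerchedPose(breaths, wings, head)

print(render_emotion(valence=0.8, arousal=0.6))    # e.g., excited/happy
print(render_emotion(valence=-0.7, arousal=-0.5))  # e.g., sad/tired
```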
{"title":"Expanding the Interaction Repertoire of a Social Drone: Physically Expressive Possibilities of a Perched BiRDe","authors":"Ori Fartook, Karon MacLean, Tal Oron-Gilad, Jessica R. Cauchard","doi":"10.1007/s12369-023-01079-w","DOIUrl":"https://doi.org/10.1007/s12369-023-01079-w","url":null,"abstract":"<p>The field of human–drone interaction (HDI) has investigated an increasing number of applications for social drones, all while focusing on the drone’s inherent ability to fly, thus overpassing interaction opportunities, such as a drone in its perched (i.e., non-flying) state. A drone cannot constantly fly and a need for more realistic HDI is needed, therefore, in this exploratory work, we have decoupled a social drone’s flying state from its perched state and investigated user interpretations of its physical rendering. To do so, we designed and developed BiRDe: a Bodily expressIons and Respiration Drone conveying Emotions. BiRDe was designed to render a range of emotional states by modulating its respiratory rate (RR) and changing its body posture using reconfigurable wings and head positions. Following its design, a validation study was conducted. In a laboratory study, participants (<span>({N}={30})</span>) observed and labeled twelve of BiRDe’s emotional behaviors using Valence and Arousal based emotional states. We identified consistent patterns in how BiRDe’s RR, wings, and head had influenced perception in terms of valence, arousal, and willingness to interact. Furthermore, participants interpreted 11 out of the 12 behaviors in line with our initial design intentions. This work demonstrates a drone’s ability to communicate emotions even while perched and offers design implications and future applications.\u0000</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"875 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138529980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychometric Properties of the Chinese Version of Service Robot Integration Willingness (SRIW) Scale in the Chinese Sample of Adults
Pub Date: 2023-11-28 | DOI: 10.1007/s12369-023-01075-0
Jie Cai, Xiangyun Tang, Xudong Lu, Xurong Fu
The willingness to use service robots plays a pivotal role in human–robot interaction. To establish a valid measure in the Chinese context, this study aimed to revisit the validity and reliability of the Service Robot Integration Willingness (SRIW) Scale among Chinese adults. A total of 955 participants were recruited to complete the Chinese version of the SRIW. Our findings revealed a four-factor model comprising 31 items, indicating a strong model fit. Furthermore, trust in automation correlated positively with the Chinese SRIW, while negative attitudes toward robots exhibited a significant inverse correlation, supporting the Chinese SRIW’s substantial criterion-related validity. In conclusion, this article introduces an updated Chinese SRIW, underscoring its efficacy in measuring the readiness to adopt service robots in China.
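Reliability and criterion-related validity of a summed scale such as the SRIW are typically checked with Cronbach's alpha and zero-order correlations with external criteria. The sketch below illustrates these standard computations in Python with pandas; the data file and column names (`sriw_*`, `trust_in_automation`, `negative_attitudes_robots`) are hypothetical and not taken from the study.

```python
# Generic illustration of reliability and criterion-validity checks for a scale;
# the data file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("sriw_survey.csv")
item_cols = [c for c in df.columns if c.startswith("sriw_")]  # the 31 scale items

# Cronbach's alpha for internal consistency.
k = len(item_cols)
item_vars = df[item_cols].var(ddof=1)
total_var = df[item_cols].sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Criterion-related validity: correlations with external measures.
df["sriw_total"] = df[item_cols].sum(axis=1)
r_trust = df["sriw_total"].corr(df["trust_in_automation"])       # expected positive
r_nars = df["sriw_total"].corr(df["negative_attitudes_robots"])  # expected negative

print(f"alpha={alpha:.2f}, r(trust)={r_trust:.2f}, r(NARS)={r_nars:.2f}")
```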
{"title":"Psychometric Properties of the Chinese Version of Service Robot Integration Willingness (SRIW) Scale in the Chinese Sample of Adults","authors":"Jie Cai, Xiangyun Tang, Xudong Lu, Xurong Fu","doi":"10.1007/s12369-023-01075-0","DOIUrl":"https://doi.org/10.1007/s12369-023-01075-0","url":null,"abstract":"<p>The willingness to use service robots plays a pivotal role in human–robot interaction. To establish a valid measure in the Chinese context, this study aimed to revisit the validity and reliability of the Service Robot Integration Willingness (SRIW) Scale among Chinese adults. A total of 955 participants were recruited to complete the Chinese version of the SRIW. Our findings revealed a four-factor model comprising 31 items, indicating a strong model fit. Furthermore, trust in automation correlated positively with the Chinese SRIW, while negative attitudes toward robots exhibited a significant inverse correlation, supportting the Chinese SRIW’s substantial criterion-related validity. In conclusion, this article introduces an updated Chinese SRIW, underscoring its efficacy in measuring the readiness to adopt service robots in China.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"24 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138529981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robot’s Gendering Trouble: A Scoping Review of Gendering Humanoid Robots and Its Effects on HRI
Pub Date: 2023-11-22 | DOI: 10.1007/s12369-023-01061-6
Giulia Perugia, Dominika Lisy
The discussion around gendering humanoid robots has gained more traction in the last few years. To lay the basis for a full comprehension of how robots’ “gender” has been understood within the Human–Robot Interaction (HRI) community—i.e., how it has been manipulated, in which contexts, and which effects it has yielded on people’s perceptions and interactions with robots—we performed a scoping review of the literature. We identified 553 papers relevant to our review, retrieved from 5 different databases. The final sample of reviewed papers included 35 papers written between 2005 and 2021, which involved a total of 3902 participants. In this article, we thoroughly summarize these papers by reporting information about their objectives and assumptions on gender (i.e., definitions and reasons to manipulate gender), their manipulation of robots’ “gender” (i.e., gender cues and manipulation checks), their experimental designs (e.g., demographics of participants, employed robots), and their results (i.e., main and interaction effects). The review reveals that robots’ “gender” does not affect crucial constructs for HRI, such as likability and acceptance, but rather bears its strongest effect on stereotyping. We leverage our different epistemological backgrounds in Social Robotics and Gender Studies to provide a comprehensive interdisciplinary perspective on the results of the review and suggest ways to move forward in the field of HRI.
{"title":"Robot’s Gendering Trouble: A Scoping Review of Gendering Humanoid Robots and Its Effects on HRI","authors":"Giulia Perugia, Dominika Lisy","doi":"10.1007/s12369-023-01061-6","DOIUrl":"https://doi.org/10.1007/s12369-023-01061-6","url":null,"abstract":"<p>The discussion around gendering humanoid robots has gained more traction in the last few years. To lay the basis for a full comprehension of how robots’ “gender” has been understood within the Human–Robot Interaction (HRI) community—i.e., how it has been manipulated, in which contexts, and which effects it has yielded on people’s perceptions and interactions with robots—we performed a scoping review of the literature. We identified 553 papers relevant for our review retrieved from 5 different databases. The final sample of reviewed papers included 35 papers written between 2005 and 2021, which involved a total of 3902 participants. In this article, we thoroughly summarize these papers by reporting information about their objectives and assumptions on gender (i.e., definitions and reasons to manipulate gender), their manipulation of robots’ “gender” (i.e., gender cues and manipulation checks), their experimental designs (e.g., demographics of participants, employed robots), and their results (i.e., main and interaction effects). The review reveals that robots’ “gender” does not affect crucial constructs for the HRI, such as likability and acceptance, but rather bears its strongest effect on stereotyping. We leverage our different epistemological backgrounds in Social Robotics and Gender Studies to provide a comprehensive interdisciplinary perspective on the results of the review and suggest ways to move forward in the field of HRI.\u0000</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"16 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138542174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It’s Not UAV, It’s Me: Demographic and Self-Other Effects in Public Acceptance of a Socially Assistive Aerial Manipulation System for Fatigue Management
Pub Date: 2023-11-17 | DOI: 10.1007/s12369-023-01072-3
Jamy Li, Mohsen Ensafjoo
Modern developments in speech-enabled drones and aerial manipulation systems (AMS) enable drones to have social interactions with people, which is important for therapeutic applications involving flight and above-eye-level monitoring in people’s homes. Not everyone, however, will accept drones into their daily lives, and consistently assessing who would accept a socially assistive drone into their home is a challenge for roboticists. An animation-based Mechanical Turk survey (N = 176) found that acceptance of a voice-enabled AMS for fatigue (i.e., physical or mental tiredness in the participant’s life) was higher among younger adults with higher education and longer-lasting symptoms of fatigue, suggesting that demographics and a need for the task performed by the drone are critical factors for drone acceptance. Participants rated the drone as more acceptable for others than for themselves, demonstrating a self-other effect. A second video-based YouGov survey (N = 404) found that younger adults rated an AMS for managing the symptom of day-to-day fatigue as more acceptable than older adults did. The self-other effect was reduced among participants who read a scenario with specific versus general phrasing of the AMS’s imagined use, suggesting that it may be caused by an attribution bias. These results demonstrate how analyzing demographics and specifying the wording of technology use can more consistently assess to whom drones for fatigue are acceptable, which is of interest to public opinion researchers and roboticists.
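A straightforward way to examine whether demographics such as age, education, and fatigue duration predict acceptance ratings is an ordinary least-squares regression. The sketch below is a hypothetical illustration with assumed variable names and data, not the analysis reported in either survey.

```python
# Hypothetical illustration: regressing drone acceptance on demographic predictors.
# The data file and variable names are assumptions, not from the paper.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("drone_acceptance_survey.csv")
# acceptance: rating-scale score; age in years; education coded ordinally;
# fatigue_months: self-reported duration of fatigue symptoms.
model = smf.ols("acceptance ~ age + education + fatigue_months", data=df).fit()
print(model.summary())  # a negative age coefficient would mirror the reported pattern
```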
{"title":"It’s Not UAV, It’s Me: Demographic and Self-Other Effects in Public Acceptance of a Socially Assistive Aerial Manipulation System for Fatigue Management","authors":"Jamy Li, Mohsen Ensafjoo","doi":"10.1007/s12369-023-01072-3","DOIUrl":"https://doi.org/10.1007/s12369-023-01072-3","url":null,"abstract":"<p>Modern developments in speech-enabled drones and aerial manipulation systems (AMS) enable drones to have social interactions with people, which is important for therapeutic applications involving flight and above-eye-level monitoring in people’s homes, but not everyone will accept drones into their daily lives. Consistently assessing who would accept a socially assistive drone into their home is a challenge for roboticists. An animation-based Mechanical Turk survey (<i>N</i> = 176) found that acceptance of a voice-enabled AMS for fatigue – i.e., physical or mental tiredness in the participant’s life – was higher among younger adults with higher education and longer symptoms of fatigue, suggesting demographics and a need for the task performed by the drone are critical factors for drone acceptance. Participants rated the drone as more acceptable for others than for themselves, demonstrating a self-other effect. A second video-based YouGov survey (<i>N</i> = 404) found that younger adults rated an AMS for managing the symptom of day-to-day fatigue as more acceptable than older adults. The self-other effect was reduced among participants who read a situation with specific versus general phrasing of the AMS’s imagined use, suggesting that it may be caused by an attribution bias. These results demonstrate how analyzing demographics and specifying the wording of technology use can more consistently assess to whom drones for fatigue are acceptable, which is of interest to public opinion researchers and roboticists.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"186 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2023-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138529982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personality-Based Adaptation of Robot Behaviour: Acceptability Results on Individuals with Cognitive Impairments
Pub Date: 2023-11-11 | DOI: 10.1007/s12369-023-01074-1
Silvia Rossi, Claudia Di Napoli, Federica Garramone, Elena Salvatore, Gabriella Santangelo
We performed a study to evaluate whether the acceptance of a social humanoid robot used for monitoring the activities of elderly users with cognitive deficits increased after interacting with the robot. In addition, we evaluated whether the robot’s acceptance improved when the interaction occurred in different modalities modulated according to each user’s cognitive and personality profile. A group of 7 participants completed assessments of cognitive and personality traits and of the robot’s level of acceptability. They interacted with the robot in their private homes for a minimum of two weeks. The interaction with the robot occurred under two different modalities: a standard modality, in which the robot performed tasks by approaching the subject at a fixed pre-defined frequency of interactions and at fixed pre-defined times, and a modulated modality, in which the robot performed tasks by approaching the subject at different frequencies set according to some personality traits and the cognitive profile of the user. The results showed no change in the acceptability level of the robot after direct interaction. Still, personality traits such as Neuroticism and Openness influenced the acceptability of the robot in the elderly only before an interaction; these traits did not seem to influence the acceptability of the new technology after a direct interaction. The case was different for cognitive profiles and demographic characteristics. Finally, the score on the pleasantness scale was higher when the interaction with the robot was set in the modulated modality rather than the standard modality. In conclusion, identifying the personality traits and the cognitive status of elderly people with cognitive deficits seems useful for modulating the type and frequency of the robot’s interaction with the user, in order to increase the acceptability and pleasantness of the instrument in everyday life.
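The abstract contrasts a fixed schedule with a modulated one but does not spell out the modulation rule. The sketch below shows one hypothetical way such a rule could look; the trait weights, baseline frequency, and clamping range are invented for illustration and are not the study's parameters.

```python
# Hypothetical scheduling rule for personality-modulated robot interactions.
# Trait weights, baseline, and clamping range are invented for illustration only.
def interactions_per_day(neuroticism: float, openness: float,
                         cognitive_score: float, baseline: float = 4.0) -> float:
    """Return a daily interaction frequency adapted to the user's profile.

    Traits are normalised to [0, 1]; cognitive_score is a normalised screening score.
    """
    freq = baseline
    freq -= 2.0 * neuroticism            # anxious users approached less often
    freq += 1.5 * openness               # open users approached more often
    freq += 1.0 * (1 - cognitive_score)  # more prompts when cognition is lower
    return max(1.0, min(freq, 8.0))      # clamp to a sensible range

print(interactions_per_day(neuroticism=0.8, openness=0.3, cognitive_score=0.6))
```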
{"title":"Personality-Based Adaptation of Robot Behaviour: Acceptability Results on Individuals with Cognitive Impairments","authors":"Silvia Rossi, Claudia Di Napoli, Federica Garramone, Elena Salvatore, Gabriella Santangelo","doi":"10.1007/s12369-023-01074-1","DOIUrl":"https://doi.org/10.1007/s12369-023-01074-1","url":null,"abstract":"Abstract We performed a study to evaluate if the acceptance of a social humanoid robot used for monitoring the activities of elderly users with cognitive deficits increased after interacting with the robot. In addition, we evaluated if the robot’s acceptance is improved when the interaction with the robot occurred in different modalities modulated according to each user’s cognitive and personality profile. A group of 7 participants underwent assessment tools for cognitive and personality traits and for the level of acceptability of the robot. They interacted with the robot at their private home for a minimum of two weeks. The interaction with the robot occurred under two different modalities: standard modality where the robot performed tasks by approaching the subject at a fixed pre-defined frequency of interactions, and at fixed pre-defined times; modulated modality where the robot performed tasks by approaching the subject at different frequencies set according to some personality traits and cognitive profile of the user. The results showed no change in the acceptability level of the robot after direct interaction. Still, personality traits such as Neuroticism and Openness influenced the acceptability of the robot in the elderly only before an interaction. At the same time, these personality traits did not seem to influence the acceptability of the new technology after a direct interaction. Different is the case of cognitive profiles and demographic characteristics. Finally, the score on the pleasantness scale was higher when the interaction with the robot was set in modulated modality rather than standard modality. In conclusion, the identification of the personality traits and the cognitive status in the elderly with cognitive deficits seems to be useful to modulate the type and frequency of interaction of the robot with the user to increase the acceptability of the instrument and pleasures in every daily life.","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"27 26","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135043010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crowd-Aware Socially Compliant Robot Navigation via Deep Reinforcement Learning
Pub Date: 2023-11-11 | DOI: 10.1007/s12369-023-01071-4
Bingxin Xue, Ming Gao, Chaoqun Wang, Yao Cheng, Fengyu Zhou
{"title":"Crowd-Aware Socially Compliant Robot Navigation via Deep Reinforcement Learning","authors":"Bingxin Xue, Ming Gao, Chaoqun Wang, Yao Cheng, Fengyu Zhou","doi":"10.1007/s12369-023-01071-4","DOIUrl":"https://doi.org/10.1007/s12369-023-01071-4","url":null,"abstract":"","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135042045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Type of Education Affects Individuals’ Adoption of Intentional Stance Towards Robots: An EEG Study
Pub Date: 2023-11-10 | DOI: 10.1007/s12369-023-01073-2
Cecilia Roselli, Uma Prashant Navare, Francesca Ciardo, Agnieszka Wykowska
Research has shown that, under certain circumstances, people can adopt the Intentional Stance towards robots and thus treat them as intentional agents. Previous evidence showed that there are factors at play in modulating the Intentional Stance, for example individuals’ years of education. In the present study, we aimed to investigate whether, given the same years of education, participants’ type of formal education (in terms of theoretical background) affected their adoption of the Intentional Stance. To do so, we recruited two samples of participants varying in their type of formal education: one sample comprised individuals with a background in robotics, whereas the other comprised individuals with a background in psychotherapy. To measure their likelihood of adopting the Intentional Stance, we asked them to complete the InStance Test (IST). To measure it at the neural level, we recorded their neural activity during a resting state via electroencephalography (EEG). Results showed that therapists attributed higher IST intentionality scores to the robot than roboticists did; that is, they were more likely to adopt the Intentional Stance to explain the robot’s behaviour. This result was mirrored by participants’ EEG activity during the resting state, as we found higher power in the gamma frequency range (associated with mentalizing and the adoption of the Intentional Stance) for therapists compared to roboticists. Therefore, we conclude that a type of education that promotes mentalizing skills increases the likelihood of attributing intentionality to robots.
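Resting-state gamma power of the kind compared here is usually estimated from the EEG power spectral density. The sketch below shows a generic Welch-based estimate for a single synthetic channel using SciPy; the sampling rate, window length, and 30–45 Hz gamma definition are assumptions, not the study's pipeline.

```python
# Generic illustration of estimating resting-state gamma-band power from one EEG
# channel via Welch's PSD; synthetic data, not the study's pipeline or parameters.
import numpy as np
from scipy.signal import welch

fs = 500                                    # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 60)          # 60 s of synthetic single-channel EEG

f, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # 2-s windows, 0.5 Hz resolution
gamma = (f >= 30) & (f <= 45)               # one common definition of the gamma band
gamma_power = np.trapz(psd[gamma], f[gamma])
print(f"gamma-band power: {gamma_power:.4f}")
```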
{"title":"Type of Education Affects Individuals’ Adoption of Intentional Stance Towards Robots: An EEG Study","authors":"Cecilia Roselli, Uma Prashant Navare, Francesca Ciardo, Agnieszka Wykowska","doi":"10.1007/s12369-023-01073-2","DOIUrl":"https://doi.org/10.1007/s12369-023-01073-2","url":null,"abstract":"Abstract Research has shown that, under certain circumstances, people can adopt the Intentional Stance towards robots and thus treat them as intentional agents. Previous evidence showed that there are factors at play in modulating the Intentional Stance, for example individuals’ years of education. In the present study, we aimed at investigating whether, given the same years of education, participants’ type of formal education- in terms of theoretical background- affected their adoption of the Intentional Stance. To do so, we recruited two samples of participants varying in their type of formal education, namely, a sample of participants comprised individuals with a background in robotics, whereas the other comprised individuals with a background in psychotherapy. To measure their likelihood of adopting the Intentional Stance, we asked them to complete the InStance Test (IST). To do it at the neural level, we recorded their neural activity during a resting state via electroencephalography (EEG). Results showed that therapists attributed higher IST scores of intentionality to the robot than roboticists, i.e., they were more likely to attribute Intentional Stance to explain robot’s behaviour. This result was mirrored by participants’ EEG neural activity during resting state, as we found higher power in the gamma frequency range (associated with mentalizing and the adoption of Intentional Stance) for therapists compared to roboticists. Therefore, we conclude that the type of education that promotes mentalizing skills increases the likelihood of attributing intentionality to robots.","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"120 43","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135136366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Special Issue on GENDERING ROBOTS (GenR): Ongoing (Re)Configurations of Gender in Robotics
Pub Date: 2023-11-07 | DOI: 10.1007/s12369-023-01078-x
Giulia Perugia, Katie Winkle, Dominika Lisy
{"title":"Editorial Special Issue Special Issue on GENDERING ROBOTS (GenR): Ongoing (Re)Configurations of Gender in Robotics","authors":"Giulia Perugia, Katie Winkle, Dominika Lisy","doi":"10.1007/s12369-023-01078-x","DOIUrl":"https://doi.org/10.1007/s12369-023-01078-x","url":null,"abstract":"","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"288 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135475477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Systems and Technology Resistance: New Tools for Monitoring Acceptance, Trust, and Tolerance
Pub Date: 2023-11-06 | DOI: 10.1007/s12369-023-01065-2
Massimiliano L. Cappuccio, Jai C. Galliott, Friederike Eyssel, Alessandro Lanteri
We introduce the notion of Tolerance for autonomous artificial agents (and its antithetical concept, Intolerance), motivating its theoretical adoption in the fields of social robotics and human–agent interaction, where it can effectively complement two contiguous but essentially distinct constructs, Acceptance and Trust, that are broadly used by researchers. We offer a comprehensive conceptual model of Tolerance, construed as a user’s insusceptibility or resilience to Autonomy Estrangement (i.e., the uncanny sense of isolation and displacement experienced by humans who believe, for right or wrong reasons, that robots can subvert and/or control their lives). We also use Intolerance to indicate the opposite property, that is, the user’s susceptibility or proneness to Autonomy Estrangement. Thus, Tolerance and Intolerance are inverse representations of the same phenomenological continuum, with Intolerance increasing when Tolerance decreases and vice versa. While Acceptance and Trust measure how satisfying and efficacious the user’s interaction with a particular robot is, the dyad Tolerance/Intolerance reflects how the user’s attitude is affected by deeply held normative beliefs about robots in general. So defined, low Tolerance (that is, high Intolerance) is expected to correlate with antagonistic responses toward the prospect of adoption: specifically, Intolerant attitudes predict the kind of anxious and hostile behaviours toward Agents that originate from concerns that autonomous systems could deeply disrupt the lives of humans (affecting their work cultures, ways of living, systems of values, etc.) or dominate them (making humans redundant, undermining their authority, threatening their uniqueness, etc.). Thus, negative beliefs and worldviews about Agents are the cause of the Intolerant attitude toward Agents, which predicts Autonomy Estrangement, which in turn correlates with low Adoption Propensity and with avoidance and rejection behaviours.
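The predicted chain (negative beliefs → Intolerant attitude → Autonomy Estrangement → low Adoption Propensity) could, in principle, be probed with simple serial regressions once the constructs are measured. The sketch below is a hypothetical illustration with assumed variable names and data, not an analysis from this article.

```python
# Hypothetical illustration of testing the predicted chain
# negative beliefs -> intolerance -> autonomy estrangement -> adoption propensity
# with serial regressions; variable names and the data file are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tolerance_survey.csv")

m1 = smf.ols("intolerance ~ negative_beliefs", data=df).fit()
m2 = smf.ols("autonomy_estrangement ~ intolerance", data=df).fit()
m3 = smf.ols("adoption_propensity ~ autonomy_estrangement", data=df).fit()

for step, m in zip(["beliefs->intolerance", "intolerance->estrangement",
                    "estrangement->adoption"], [m1, m2, m3]):
    coef = m.params.iloc[1]  # slope of the predictor in each step
    print(f"{step}: b = {coef:.2f}, R^2 = {m.rsquared:.2f}")
```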
{"title":"Autonomous Systems and Technology Resistance: New Tools for Monitoring Acceptance, Trust, and Tolerance","authors":"Massimiliano L. Cappuccio, Jai C. Galliott, Friederike Eyssel, Alessandro Lanteri","doi":"10.1007/s12369-023-01065-2","DOIUrl":"https://doi.org/10.1007/s12369-023-01065-2","url":null,"abstract":"Abstract We introduce the notion of Tolerance for autonomous artificial agents (and its antithetical concept, Intolerance ), motivating its theoretical adoption in the fields of social robotics and human—agent interaction, where it can effectively complement two contiguous, but essentially distinct, constructs— Acceptance and Trust— that are broadly used by researchers. We offer a comprehensive conceptual model of Tolerance, construed as a user’s insusceptibility or resilience to Autonomy Estrangement (i.e., the uncanny sense of isolation and displacement experienced by the humans who believe, for right or wrong reasons, that robots can subvert and/or control their lives). We also refer to Intolerance to indicate the opposite property, that is the user’s susceptibility or proneness to Autonomy Estrangement. Thus, Tolerance and Intolerance are inverse representations of the same phenomenological continuum, with Intolerance increasing when Tolerance decreases and vice versa. While Acceptance and Trust measure how the user’s interaction with a particular robot is satisfying and efficacious, the dyad Tolerance/Intolerance reflects how the user’s attitude is affected by deeply held normative beliefs about robots in general. So defined, a low Tolerance (that is a high Intolerance) is expected to correlate to antagonistic responses toward the prospect of adoption: specifically, Intolerant attitudes predict the kind of anxious and hostile behaviours toward Agents that originate from the concerns that autonomous systems could deeply disrupt the lives of humans (affecting their work cultures, ways of living, systems of values, etc.) or dominate them (making humans redundant, undermining their authority, threatening their uniqueness, etc.). Thus, Negative beliefs and worldviews about Agents are the cause of the Intolerant attitude toward Agents, which predicts Autonomy Estrangement, which in turn correlates to low Adoption Propensity and avoidance and rejection behaviours.","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"181 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135679373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}