S. Laniel, D. Létourneau, François Grondin, M. Labbé, François Ferland, F. Michaud
Abstract In health care, a telepresence robot could be used to have a clinician or a caregiver assist seniors in their homes, without having to travel to these locations. However, the usability of these platforms for such applications requires that they can navigate and interact with a certain level of autonomy. For instance, robots should be able to go to their charging station in case of a low energy level or a telecommunication failure. The remote operator could be assisted by the robot’s capabilities to navigate safely at home and to follow and track people with whom to interact. This requires the integration of autonomous decision-making capabilities on a platform equipped with appropriate sensing and action modalities, validated both in the laboratory and in real homes. To document and study these translational issues, this article presents such an integration on a Beam telepresence platform using three open-source libraries for the integrated robot control architecture, autonomous navigation, and sound processing, developed to meet real-time, limited-processing, and robustness requirements so that they can work in real-life settings. Validation of the resulting platform, named SAM, is presented based on trials carried out in 10 homes. The observations made provide guidance on what to improve and will help identify interaction scenarios for upcoming usability studies with seniors, clinicians and caregivers.
{"title":"Toward enhancing the autonomy of a telepresence mobile robot for remote home care assistance","authors":"S. Laniel, D. Létourneau, François Grondin, M. Labbé, François Ferland, F. Michaud","doi":"10.1515/pjbr-2021-0016","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0016","url":null,"abstract":"Abstract In health care, a telepresence robot could be used to have a clinician or a caregiver assist seniors in their homes, without having to travel to these locations. However, the usability of these platforms for such applications requires that they can navigate and interact with a certain level of autonomy. For instance, robots should be able to go to their charging station in case of low energy level or telecommunication failure. The remote operator could be assisted by the robot’s capabilities to navigate safely at home and to follow and track people with whom to interact. This requires the integration of autonomous decision-making capabilities on a platform equipped with appropriate sensing and action modalities, which are validated out in the laboratory and in real homes. To document and study these translational issues, this article presents such integration on a Beam telepresence platform using three open-source libraries for integrated robot control architecture, autonomous navigation and sound processing, developed with real-time, limited processing and robustness requirements, so that they can work in real-life settings. Validation of the resulting platform, named SAM, is presented based on the trials carried out in 10 homes. Observations made provide guidance on what to improve and will help identify interaction scenarios for the upcoming usability studies with seniors, clinicians and caregivers.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"40 1","pages":"214 - 237"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76491379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Koay, M. Webster, C. Dixon, Paul Gainer, D. Syrdal, Michael Fisher, K. Dautenhahn
Abstract When studying the use of assistive robots in home environments, and especially how such robots can be personalised to meet the needs of the resident, key concerns are issues related to behaviour verification, behaviour interference and safety. Here, personalisation refers to the teaching of new robot behaviours by both technical and non-technical end users. In this article, we consider the issue of behaviour interference caused by situations where newly taught robot behaviours may affect or be affected by existing behaviours and thus will not, or might not, ever be executed. We focus in particular on how such situations can be detected and presented to the user. We describe the human–robot behaviour teaching system that we developed as well as the formal behaviour checking methods used. The online use of behaviour checking, based on static analysis of behaviours during the operation of the robot, is demonstrated and evaluated in a user study. We conducted a proof-of-concept human–robot interaction study with an autonomous, multi-purpose robot operating within a smart home environment. Twenty participants individually taught the robot behaviours according to instructions they were given, some of which caused interference with other behaviours. A mechanism for detecting behaviour interference provided feedback to participants and suggestions on how to resolve those conflicts. We assessed the participants’ views on detected interference as reported by the behaviour teaching system. Results indicate that interference warnings given to participants during teaching fostered an understanding of the issue. We did not find a significant influence of participants’ technical background. These results highlight a promising path towards verification and validation of assistive home companion robots that allow end-user personalisation.
{"title":"Use and usability of software verification methods to detect behaviour interference when teaching an assistive home companion robot: A proof-of-concept study","authors":"K. Koay, M. Webster, C. Dixon, Paul Gainer, D. Syrdal, Michael Fisher, K. Dautenhahn","doi":"10.1515/pjbr-2021-0028","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0028","url":null,"abstract":"Abstract When studying the use of assistive robots in home environments, and especially how such robots can be personalised to meet the needs of the resident, key concerns are issues related to behaviour verification, behaviour interference and safety. Here, personalisation refers to the teaching of new robot behaviours by both technical and non-technical end users. In this article, we consider the issue of behaviour interference caused by situations where newly taught robot behaviours may affect or be affected by existing behaviours and thus, those behaviours will not or might not ever be executed. We focus in particular on how such situations can be detected and presented to the user. We describe the human–robot behaviour teaching system that we developed as well as the formal behaviour checking methods used. The online use of behaviour checking is demonstrated, based on static analysis of behaviours during the operation of the robot, and evaluated in a user study. We conducted a proof-of-concept human–robot interaction study with an autonomous, multi-purpose robot operating within a smart home environment. Twenty participants individually taught the robot behaviours according to instructions they were given, some of which caused interference with other behaviours. A mechanism for detecting behaviour interference provided feedback to participants and suggestions on how to resolve those conflicts. We assessed the participants’ views on detected interference as reported by the behaviour teaching system. Results indicate that interference warnings given to participants during teaching provoked an understanding of the issue. We did not find a significant influence of participants’ technical background. These results highlight a promising path towards verification and validation of assistive home companion robots that allow end-user personalisation.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"262 1","pages":"402 - 422"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76508610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H. Sirithunge, G. Porawagamage, Nikolas Dahn, A. Jayasekara, C. D. Pathiranage
Abstract Artificial agents can considerably uplift the living standards of the domestic population. One hindrance to this is that robots are not yet competent at perceiving complex human behaviors. With such perceptive skills, nonexpert users would find it easier to work with their robot companion while giving it fewer and fewer instructions. Perceiving the internal state of a user, or “user situation,” before an interaction is crucial in this regard. A variety of factors affect this user situation; among them, posture is prominent in displaying the emotional state of a person. This article presents a novel approach to identify diverse human postures often encountered in domestic environments and to let a robot assess its user’s emotional state of mind from those postures before an interaction. The robot therefore evaluates the posture and the overall postural behavior of its user throughout an observation period before initiating an interaction. This user evaluation is nonverbal, and decisions are made through observation alone. We introduce a variable called “valence” to measure how “relaxed” or “stressed” a user is in a given encounter, and the robot decides upon an appropriate approach behavior accordingly. The proposed concept is capable of recognizing both arm and body postures as well as postural behaviors over time. This allows the robot itself to initiate an interaction in a favorable situation, making the scenario appear more intelligent and hence more humanlike. The system has been implemented, and experiments have been conducted on an assistive robot placed in an artificially created domestic environment. Results of the experiments have been used to validate the proposed concept, and critical observations are discussed.
{"title":"Recognition of arm and body postures as social cues for proactive HRI","authors":"H. Sirithunge, G. Porawagamage, Nikolas Dahn, A. Jayasekara, C. D. Pathiranage","doi":"10.1515/pjbr-2021-0030","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0030","url":null,"abstract":"Abstract Artificial agents can uplift the living standards of domestic population considerably. One hindrance for this is that the robot is less competent to perceive complex human behaviors. With such perceptive skills in the robot, nonexpert users will find it easier to cope with their robot companion with less and less instructions to follow. Perception of the internal state of a user or “user situation” before interaction is crucial in this regard. There are a variety of factors that affect this user situation. Out of these, posture becomes prominent in displaying the emotional state of a person. This article presents a novel approach to identify diverse human postures often encountered in domestic environments and how a robot could assess its user’s emotional state of mind before an interaction based on postures. Therefore, the robot evaluates posture and the overall postural behavior of its user throughout the period of observation before initiating an interaction with its user. Aforementioned user evaluation is nonverbal and decisions are made through observation as well. We introduced a variable called “valence” to measure how “relaxed” or “stressed” a user is, in a certain encounter. The robot decides upon an appropriate approach behavior accordingly. Furthermore, the proposed concept was capable of recognizing both arm and body postures and both postural behaviors over time. This leads to an interaction initiated by robot itself in a favorable situation so that the scenario looks more intelligent. Hence more humanlike. The system has been implemented, and experiments have been conducted on an assistive robot placed in an artificially created domestic environment. Results of the experiments have been used to validate the proposed concept and critical observations are discussed.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"617 1","pages":"503 - 522"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73920238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract The increasing presence of computers in society calls for a better understanding of how differently the sociocognitive mechanisms involved in natural human relationships operate in human–robot interactions. In the present study, we investigated one fundamental aspect often neglected in the psychology and educational sciences literature: how the source of information, either human or computer, influences its perceived reliability and modulates cognitive and motivational processes. In Experiment 1, participants performed a reasoning task that presented cues following participants’ errors, helping them to succeed in the task. Using two levels of task difficulty, we manipulated the source of the cues as either a human or a computer. In addition to task accuracy, Experiment 2 assessed the impact of the information source on socially and nonsocially related dimensions of achievement goals. In Experiment 1, participants who believed that they received cues from a human teacher performed better on difficult trials compared to those who believed that they received cues from a computer. In Experiment 2, we replicated these findings and additionally showed that the nature of the source only had an impact on the socially related dimension of achievement goals, which in turn mediated the source’s effect on reasoning performance. For the first time, the present study showed modulations of cognitive and motivational processes resulting from the manipulation of the type of information source aimed at providing assistance with a reasoning task. The findings highlight the importance of considering the social and motivational aspects involved in human–computer interactions.
{"title":"Human vs computer: What effect does the source of information have on cognitive performance and achievement goal orientation?","authors":"Nicolas Spatola, J. Chevalère, Rebecca Lazarides","doi":"10.1515/pjbr-2021-0012","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0012","url":null,"abstract":"Abstract The increasing presence of computers in society calls for the need to better understand how differently the sociocognitive mechanisms involved in natural human relationships operate in human–robot interactions. In the present study, we investigated one fundamental aspect often neglected in the literatures on psychology and educational sciences: how the source of information, either human or computer, influences its perceived reliability and modulates cognitive and motivational processes. In Experiment 1, participants performed a reasoning task that presented cues following participants’ errors, helping them to succeed in the task. Using two levels of task difficulty, we manipulated the source of the cues as either a human or a computer. In addition to task accuracy, Experiment 2 assessed the impact of the information source on socially and nonsocially related dimensions of achievement goals. In Experiment 1, participants who believed that they received cues from a human teacher performed better on difficult trials compared to those who believed that they received cues from a computer. In Experiment 2, we replicated these findings by additionally showing that the nature of the source only had an impact on the socially related dimension of achievement goals, which in turn mediated the source’s effect on reasoning performance. For the first time, the present study showed modulations of cognitive and motivational processes resulting from the manipulation of the type of information source aimed at providing assistance with a reasoning task. The findings highlight the importance of considering the social and motivational aspects involved in human–computer interactions.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"3 1","pages":"175 - 186"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81904462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Privacy is an essential topic in (social) robotics and becomes even more important when considering interactive and autonomous robots within the domestic environment. Robots will collect a great deal of personal and sensitive information about their users and environment. Privacy here also covers (cyber-)security and the protection of information against misuse by the service providers involved. So far, the main focus has been on theoretical concepts that propose privacy principles for robots. This article provides a privacy framework as a feasible approach for addressing security and privacy issues. The proposed privacy framework is placed in the context of a user-centered design approach to highlight the correlation between the design process steps and the steps of the privacy framework. Furthermore, this article introduces feasible privacy methodologies for privacy-enhancing development to simplify the risk assessment and meet the privacy principles. Even though user participation plays an essential role in robot development, it is not the focus of this article. The employed privacy methodologies are showcased in a use case of a robot as an interaction partner, contrasting two different use case scenarios to underline the importance of context awareness.
{"title":"Privacy framework for context-aware robot development","authors":"Tanja Heuer, Ina Schiering, R. Gerndt","doi":"10.1515/pjbr-2021-0032","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0032","url":null,"abstract":"Abstract Privacy is an essential topic in (social) robotics and becomes even more important when considering interactive and autonomous robots within the domestic environment. Robots will collect a lot of personal and sensitive information about the users and their environment. Thereby, privacy does consider the topic of (cyber-)security and the protection of information against misuse by involved service providers. So far, the main focus relies on theoretical concepts to propose privacy principles for robots. This article provides a privacy framework as a feasible approach to consider security and privacy issues as a basis. Thereby, the proposed privacy framework is put in the context of a user-centered design approach to highlight the correlation between the design process steps and the steps of the privacy framework. Furthermore, this article introduces feasible privacy methodologies for privacy-enhancing development to simplify the risk assessment and meet the privacy principles. Even though user participation plays an essential role in robot development, this is not the focus of this article. Even though user participation plays an essential role in robot development, this is not the focus of this article. The employed privacy methodologies are showcased in a use case of a robot as an interaction partner contrasting two different use case scenarios to encourage the importance of context awareness.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"5 1","pages":"468 - 480"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82086098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Chevalier, Valentina Vasco, C. Willemse, Davide De Tommaso, V. Tikhanoff, U. Pattacini, A. Wykowska
Abstract We investigated the influence of visual sensitivity on the performance of an imitation task with the robot R1 in its virtual and physical forms. Virtual and physical embodiments offer different sensory experiences to users. As individuals respond differently to their sensory environment, their sensory sensitivity may play a role in the interaction with a robot. Investigating how sensory sensitivity can influence these interactions appears to be a helpful tool for evaluating and designing them. Here we asked 16 participants to perform an imitation task with a virtual and a physical robot under conditions of full and occluded visibility, and to report the strategy they used to perform this task. We also asked them to complete the Sensory Perception Quotient questionnaire. Sensory sensitivity in vision predicted the participants’ performance in imitating the robot’s upper limb movements. From the self-report questionnaire, we observed that the participants relied more on visual sensory cues to perform the task with the physical robot than with the virtual robot. From these results, we propose that a physical embodiment enables the user to invest less cognitive effort in an imitation task than a virtual embodiment. These encouraging results suggest that pursuing this line of research is suitable for improving and evaluating the effects of the physical and virtual embodiment of robots for applications in healthy and clinical settings.
{"title":"Upper limb exercise with physical and virtual robots: Visual sensitivity affects task performance","authors":"P. Chevalier, Valentina Vasco, C. Willemse, Davide De Tommaso, V. Tikhanoff, U. Pattacini, A. Wykowska","doi":"10.1515/pjbr-2021-0014","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0014","url":null,"abstract":"Abstract We investigated the influence of visual sensitivity on the performance of an imitation task with the robot R1 in its virtual and physical forms. Virtual and physical embodiments offer different sensory experience to the users. As all individuals respond differently to their sensory environment, their sensory sensitivity may play a role in the interaction with a robot. Investigating how sensory sensitivity can influence the interactions appears to be a helpful tool to evaluate and design such interactions. Here we asked 16 participants to perform an imitation task, with a virtual and a physical robot under conditions of full and occluded visibility, and to report the strategy they used to perform this task. We asked them to complete the Sensory Perception Quotient questionnaire. Sensory sensitivity in vision predicted the participants’ performance in imitating the robot’s upper limb movements. From the self-report questionnaire, we observed that the participants relied more on visual sensory cues to perform the task with the physical robot than on the virtual robot. From these results, we propose that a physical embodiment enables the user to invest a lower cognitive effort when performing an imitation task over a virtual embodiment. The results presented here are encouraging that following this line of research is suitable to improve and evaluate the effects of the physical and virtual embodiment of robots for applications in healthy and clinical settings.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"24 1","pages":"199 - 213"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90063751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alexander Wilkinson, Michael Gonzales, Patrick Hoey, David Kontak, Dian Wang, Noah Torname, Sam Laderoute, Zhao Han, Jordan Allspaw, Robert W. Platt, H. Yanco
Abstract The design of user interfaces (UIs) for assistive robot systems can be improved through the use of a set of design guidelines presented in this article. As an example, the article presents two different UI designs for an assistive manipulation robot system. We explore the design considerations from these two contrasting UIs. The first is referred to as the graphical user interface (GUI), which the user operates entirely through a touchscreen as a representation of the state of the art. The second is a type of novel UI referred to as the tangible user interface (TUI). The TUI makes use of devices in the real world, such as laser pointers and a projector–camera system that enables augmented reality. Each of these interfaces is designed to allow the system to be operated by an untrained user in an open environment such as a grocery store. Our goal is for these guidelines to aid researchers in the design of human–robot interaction for assistive robot systems, particularly when designing multiple interaction methods for direct comparison.
{"title":"Design guidelines for human–robot interaction with assistive robot manipulation systems","authors":"Alexander Wilkinson, Michael Gonzales, Patrick Hoey, David Kontak, Dian Wang, Noah Torname, Sam Laderoute, Zhao Han, Jordan Allspaw, Robert W. Platt, H. Yanco","doi":"10.1515/pjbr-2021-0023","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0023","url":null,"abstract":"Abstract The design of user interfaces (UIs) for assistive robot systems can be improved through the use of a set of design guidelines presented in this article. As an example, the article presents two different UI designs for an assistive manipulation robot system. We explore the design considerations from these two contrasting UIs. The first is referred to as the graphical user interface (GUI), which the user operates entirely through a touchscreen as a representation of the state of the art. The second is a type of novel UI referred to as the tangible user interface (TUI). The TUI makes use of devices in the real world, such as laser pointers and a projector–camera system that enables augmented reality. Each of these interfaces is designed to allow the system to be operated by an untrained user in an open environment such as a grocery store. Our goal is for these guidelines to aid researchers in the design of human–robot interaction for assistive robot systems, particularly when designing multiple interaction methods for direct comparison.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"1088 1","pages":"392 - 401"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88496654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Can we have personal robots without giving away personal data? And what is the role of a robot’s Privacy Policy in that question? This work explores for the first time privacy in the context of consumer robotics through the lens of the information communicated to users through Privacy Policies and Terms and Conditions. Privacy, personal data, and non-personal data are discussed in light of the human–robot relationship, while we attempt to draw connections to dimensions related to personalization, trust, and transparency. We introduce a novel methodology to assess how the “Organization for Economic Cooperation and Development Guidelines Governing the Protection of Privacy and Trans-Border Flows of Personal Data” are reflected in the publicly available Privacy Policies and Terms and Conditions in the consumer robotics field. We draw comparisons between the ways eight consumer robotics companies approach privacy principles. Current findings demonstrate significant deviations in the structure and context of privacy terms. Some practical dimensions for improving the context and the format of privacy terms are discussed. The ultimate goal of this work is to raise awareness regarding the various privacy strategies used by robot companies while creating a usable way to make this information more relevant and accessible to users.
{"title":"Toward privacy-sensitive human–robot interaction: Privacy terms and human–data interaction in the personal robot era","authors":"A. Chatzimichali, Ross Harrison, D. Chrysostomou","doi":"10.1515/pjbr-2021-0013","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0013","url":null,"abstract":"Abstract Can we have personal robots without giving away personal data? Besides, what is the role of a robots Privacy Policy in that question? This work explores for the first time privacy in the context of consumer robotics through the lens of information communicated to users through Privacy Policies and Terms and Conditions. Privacy, personal and non-personal data are discussed under the light of the human–robot relationship, while we attempt to draw connections to dimensions related to personalization, trust, and transparency. We introduce a novel methodology to assess how the “Organization for Economic Cooperation and Development Guidelines Governing the Protection of Privacy and Trans-Border Flows of Personal Data” are reflected upon the publicly available Privacy Policies and Terms and Conditions in the consumer robotics field. We draw comparisons between the ways eight consumer robotic companies approach privacy principles. Current findings demonstrate significant deviations in the structure and context of privacy terms. Some practical dimensions in terms of improving the context and the format of privacy terms are discussed. The ultimate goal of this work is to raise awareness regarding the various privacy strategies used by robot companies while ultimately creating a usable way to make this information more relevant and accessible to users.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"26 1","pages":"160 - 174"},"PeriodicalIF":0.0,"publicationDate":"2020-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72760547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Burns, H. Seifi, Hyosang Lee, K. J. Kuchenbecker
Abstract Children with autism need innovative solutions that help them learn to master everyday experiences and cope with stressful situations. We propose that socially assistive robot companions could better understand and react to a child’s needs if they utilized tactile sensing. We examined the existing relevant literature to create an initial set of six tactile-perception requirements, and we then evaluated these requirements through interviews with 11 experienced autism specialists from a variety of backgrounds. Thematic analysis of the comments shared by the specialists revealed three overarching themes: the touch-seeking and touch-avoiding behavior of autistic children, their individual differences and customization needs, and the roles that a touch-perceiving robot could play in such interactions. Using the interview study feedback, we refined our initial list into seven qualitative requirements that describe robustness and maintainability, sensing range, feel, gesture identification, spatial, temporal, and adaptation attributes for the touch-perception system of a robot companion for children with autism. Finally, by utilizing the literature and current best practices in tactile sensor development and signal processing, we transformed these qualitative requirements into quantitative specifications. We discuss the implications of these requirements for future human–robot interaction research in the sensing, computing, and user research communities.
{"title":"Getting in touch with children with autism: Specialist guidelines for a touch-perceiving robot","authors":"R. Burns, H. Seifi, Hyosang Lee, K. J. Kuchenbecker","doi":"10.1515/pjbr-2021-0010","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0010","url":null,"abstract":"Abstract Children with autism need innovative solutions that help them learn to master everyday experiences and cope with stressful situations. We propose that socially assistive robot companions could better understand and react to a child’s needs if they utilized tactile sensing. We examined the existing relevant literature to create an initial set of six tactile-perception requirements, and we then evaluated these requirements through interviews with 11 experienced autism specialists from a variety of backgrounds. Thematic analysis of the comments shared by the specialists revealed three overarching themes: the touch-seeking and touch-avoiding behavior of autistic children, their individual differences and customization needs, and the roles that a touch-perceiving robot could play in such interactions. Using the interview study feedback, we refined our initial list into seven qualitative requirements that describe robustness and maintainability , sensing range , feel , gesture identification , spatial , temporal , and adaptation attributes for the touch-perception system of a robot companion for children with autism. Finally, by utilizing the literature and current best practices in tactile sensor development and signal processing, we transformed these qualitative requirements into quantitative specifications. We discuss the implications of these requirements for future human–robot interaction research in the sensing, computing, and user research communities.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":" 5","pages":"115 - 135"},"PeriodicalIF":0.0,"publicationDate":"2020-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1515/pjbr-2021-0010","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72380049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H. Sirithunge, Ravindu T. Bandara, A. Jayasekara, C. D. Pathiranage
Abstract Intelligent robot companions contribute significantly to improving the living standards of people in modern society. Therefore, humanlike decision-making skills are sought after in the design of such robots. On the one hand, such features enable the robot to be easily handled by its human user. On the other hand, the robot gains the capability of dealing with humans without disturbing them by its behavior. Perceiving behavioral ontology prior to an interaction is an important aspect in this regard. Furthermore, humans make an instant evaluation of the task-related movements of others before approaching them. In this article, we present a mechanism to monitor how the activity space is utilized by a particular user on a temporal basis, as an ontological assessment of the situation, and then to determine an appropriate approach behavior for a proactive robot to initiate an interaction with its user. This evaluation was then used to determine appropriate proxemic behavior for approaching that person. The usage of activity space varies depending on the task of an individual. We used a probabilistic approach to find the areas that are the most and least likely to be occupied within the activity space of a particular individual during various tasks. As the robot approaches its subject after analyzing the subject’s spatial behavior within his/her activity space, spatial constraints arising from the robot’s movement could be reduced. Hence, a more socially acceptable spatial behavior could be observed from the robot. In other words, an etiquette for approach behavior is derived considering the user’s activity space. Experiment results used to validate the system are presented, and critical observations made during the study and their implications are discussed.
{"title":"A probabilistic evaluation of human activity space for proactive approach behavior of a social robot","authors":"H. Sirithunge, Ravindu T. Bandara, A. Jayasekara, C. D. Pathiranage","doi":"10.1515/pjbr-2021-0006","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0006","url":null,"abstract":"Abstract Intelligent robot companions contribute significantly to improve the living standards of people in the modern society. Therefore, humanlike decision-making skills are sought after during the design of such robots. On the one hand, such features enable the robot to be easily handled by its human user. On the other hand, the robot will have the capability of dealing with humans without disturbing them by its behavior. Perception of Behavioral Ontology prior to an interaction is an important aspect in this regard. Furthermore, humans make an instant evaluation of task-related movements of others before approaching them. In this article, we present a mechanism to monitor how the activity space is utilized by a particular user on a temporal basis as an ontological assessment of the situation and then determine an appropriate approach behavior for a proactive robot to initiate an interaction with its user. This evaluation was then used to determine appropriate proxemic behavior to approach that person. The usage of activity space varies depending on the task of an individual. We used a probabilistic approach to find the areas that are the most and least likely to be occupied within the activity space of a particular individual during various tasks. As the robot approaches its subject after analyzing the spatial behavior of the subject within his/her activity space, spatial constraints occurred as a result of which robot’s movement could be demolished. Hence, a more socially acceptable spatial behavior could be observed from the robot. In other words, an etiquette based on approach behavior is derived considering the user’s activity space. Experiment results used to validate the system are presented, and critical observations during the study and implications are discussed.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"10 1","pages":"102 - 114"},"PeriodicalIF":0.0,"publicationDate":"2020-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84300704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}