{"title":"Special issue on robots and autism: Conceptualization, technology, and methodology","authors":"Kim Baraka, Rebecca Beights, Marta Couto, Michael Radice","doi":"10.1515/pjbr-2021-0022","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0022","url":null,"abstract":"","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"88 1","pages":"297 - 298"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87229793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The relevance of causation in robotics: A review, categorization, and analysis","authors":"T. Hellström","doi":"10.1515/pjbr-2021-0017","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0017","url":null,"abstract":"Abstract In this article, we investigate the role of causal reasoning in robotics research. Inspired by a categorization of human causal cognition, we propose a categorization of robot causal cognition. For each category, we identify related earlier work in robotics and also connect to research in other sciences. While the proposed categories mainly cover the sense–plan–act level of robotics, we also identify a number of higher-level aspects and areas of robotics research where causation plays an important role, for example, understandability, machine ethics, and robotics research methodology. Overall, we conclude that causation underlies several problem formulations in robotics, but it is still surprisingly absent in published research, in particular when it comes to explicit mentioning and using of causal concepts and terms. We discuss the reasons for, and consequences of, this and hope that this article clarifies the broad and deep connections between causal reasoning and robotics and also by pointing at the close connections to other research areas. At best, this will also contribute to a “causal revolution” in robotics.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"54 1","pages":"238 - 255"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88474548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
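Each entry in this listing is a single JSON object whose useful bibliographic fields sit alongside platform bookkeeping. As a minimal sketch of pulling the human-readable fields out of one record (field names are taken from the records in this listing; the trimmed sample below copies values from the record above):

```python
import json

# A trimmed record using field names that appear in the listing;
# values are copied from the record above, other fields omitted.
record_json = '''
{"title": "The relevance of causation in robotics: A review, categorization, and analysis",
 "authors": "T. Hellström",
 "doi": "10.1515/pjbr-2021-0017",
 "journal": {"name": "Paladyn : journal of behavioral robotics", "pages": "238 - 255"},
 "publicationDate": "2021-01-01"}
'''

record = json.loads(record_json)

# Assemble a compact citation line from the metadata fields.
citation = (f'{record["authors"]} ({record["publicationDate"][:4]}). '
            f'{record["title"]}. {record["journal"]["name"]}, '
            f'pp. {record["journal"]["pages"]}. doi:{record["doi"]}')
print(citation)
```

The nested `journal` object is the only non-flat field needed for a citation; everything else is a top-level string.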
{"title":"AI and robotics to help older adults: Revisiting projects in search of lessons learned","authors":"Gabriella Cortellessa, Riccardo De Benedictis, Francesca Fracasso, Andrea Orlandini, A. Umbrico, A. Cesta","doi":"10.1515/pjbr-2021-0025","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0025","url":null,"abstract":"Abstract This article is a retrospective overview of work performed in the domain of Active Assisted Living over a span of almost 18 years. The authors have been creating and refining artificial intelligence (AI) and robotics solutions to support older adults in maintaining their independence and improving their quality of life. The goal of this article is to identify strong features and general lessons learned from those experiences and conceive guidelines and new research directions for future deployment, also relying on an analysis of similar research efforts. The work considers key points that have contributed to increase the success of the innovative solutions grounding them on known technology acceptance models. The analysis is presented with a threefold perspective: A Technological vision illustrates the characteristics of the support systems to operate in a real environment with continuity, robustness, and safety; a Socio-Health perspective highlights the role of experts in the socio-assistance domain to provide contextualized and personalized help based on actual people’s needs; finally, a Human dimension takes into account the personal aspects that influence the interaction with technology in the long term experience. The article promotes the crucial role of AI and robotics in ensuring intelligent and situated assistive behaviours. Finally, considering that the produced solutions are socio-technical systems, the article suggests a transdisciplinary approach in which different relevant disciplines merge together to have a complete, coordinated, and more informed vision of the problem.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"170 1","pages":"356 - 378"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87665703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are robots perceived as good decision makers? A study investigating trust and preference of robotic and human linesman-referees in football","authors":"Kaustav Das, Yixiao Wang, K. Green","doi":"10.1515/pjbr-2021-0020","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0020","url":null,"abstract":"Abstract Increasingly, robots are decision makers in manufacturing, finance, medicine, and other areas, but the technology may not be trusted enough for reasons such as gaps between expectation and competency, challenges in explainable AI, users’ exposure level to the technology, etc. To investigate the trust issues between users and robots, the authors employed in this study, the case of robots making decisions in football (or “soccer” as it is known in the US) games as referees. More specifically, we presented a study on how the appearance of a human and three robotic linesmen (as presented in a study by Malle et al.) impacts fans’ trust and preference for them. Our online study with 104 participants finds a positive correlation between “Trust” and “Preference” for humanoid and human linesmen, but not for “AI” and “mechanical” linesmen. Although no significant trust differences were observed for different types of linesmen, participants do prefer human linesman to mechanical and humanoid linesmen. Our qualitative study further validated these quantitative findings by probing possible reasons for people’s preference: when the appearance of a linesman is not humanlike, people focus less on the trust issues but more on other reasons for their linesman preference such as efficiency, stability, and minimal robot design. These findings provide important insights for the design of trustworthy decision-making robots which are increasingly integrated to more and more aspects of our everyday lives.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"52 1","pages":"287 - 296"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90947311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the use of human aware navigation in industrial robot arms","authors":"Matthew Story, Cyril Jaksic, S. Fletcher, P. Webb, Gilbert Tang, Jonathan Carberry","doi":"10.1515/pjbr-2021-0024","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0024","url":null,"abstract":"Abstract Although the principles followed by modern standards for interaction between humans and robots follow the First Law of Robotics popularized in science fiction in the 1960s, the current standards regulating the interaction between humans and robots emphasize the importance of physical safety. However, they are less developed in another key dimension: psychological safety. As sales of industrial robots have been increasing over recent years, so has the frequency of human–robot interaction (HRI). The present article looks at the current safety guidelines for HRI in an industrial setting and assesses their suitability. This article then presents a means to improve current standards utilizing lessons learned from studies into human aware navigation (HAN), which has seen increasing use in mobile robotics. This article highlights limitations in current research, where the relationships established in mobile robotics have not been carried over to industrial robot arms. To understand this, it is necessary to focus less on how a robot arm avoids humans and more on how humans react when a robot is within the same space. Currently, the safety guidelines are behind the technological advance, however, with further studies aimed at understanding HRI and applying it to newly developed path finding and obstacle avoidance methods, science fiction can become science fact.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"12 1","pages":"379 - 391"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89889686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Getting collaborative robots to work: A study of ethics emerging during the implementation of cobots","authors":"J. Wallace","doi":"10.1515/pjbr-2021-0019","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0019","url":null,"abstract":"Abstract Following ethnographic studies of Danish companies, this article examines how small- and medium-sized companies are implementing cobots into their manufacturing systems and considers how this is changing the practices of technicians and operators alike. It considers how this changes human values and has ethical consequences for the companies involved. By presenting a range of dilemmas arising during emergent processes, it raises questions about the extent to which ethics can be regulated and predetermined in processes of robot implementation and the resulting reconfiguration of work.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"51 1","pages":"299 - 309"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78860767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CASIE – Computing affect and social intelligence for healthcare in an ethical and trustworthy manner","authors":"Laurentiu A. Vasiliu, Keith Cortis, Ross McDermott, Aphra Kerr, Arne Peters, Marc Hesse, J. Hagemeyer, Tony Belpaeme, John McDonald, Rudi C. Villing, A. Mileo, Annalina Capulto, Michael Scriney, Sascha S. Griffiths, A. Koumpis, Brian Davis","doi":"10.1515/pjbr-2021-0026","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0026","url":null,"abstract":"Abstract This article explores the rapidly advancing innovation to endow robots with social intelligence capabilities in the form of multilingual and multimodal emotion recognition, and emotion-aware decision-making capabilities, for contextually appropriate robot behaviours and cooperative social human–robot interaction for the healthcare domain. The objective is to enable robots to become trustworthy and versatile social robots capable of having human-friendly and human assistive interactions, utilised to better assist human users’ needs by enabling the robot to sense, adapt, and respond appropriately to their requirements while taking into consideration their wider affective, motivational states, and behaviour. We propose an innovative approach to the difficult research challenge of endowing robots with social intelligence capabilities for human assistive interactions, going beyond the conventional robotic sense-think-act loop. We propose an architecture that addresses a wide range of social cooperation skills and features required for real human–robot social interaction, which includes language and vision analysis, dynamic emotional analysis (long-term affect and mood), semantic mapping to improve the robot’s knowledge of the local context, situational knowledge representation, and emotion-aware decision-making. Fundamental to this architecture is a normative ethical and social framework adapted to the specific challenges of robots engaging with caregivers and care-receivers.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"105 1","pages":"437 - 453"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80662422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social robot deception and the culture of trust","authors":"H. Sætra","doi":"10.1515/pjbr-2021-0021","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0021","url":null,"abstract":"Abstract Human beings are deeply social, and both evolutionary traits and cultural constructs encourage cooperation based on trust. Social robots interject themselves in human social settings, and they can be used for deceptive purposes. Robot deception is best understood by examining the effects of deception on the recipient of deceptive actions, and I argue that the long-term consequences of robot deception should receive more attention, as it has the potential to challenge human cultures of trust and degrade the foundations of human cooperation. In conclusion: regulation, ethical conduct by producers, and raised general awareness of the issues described in this article are all required to avoid the unfavourable consequences of a general degradation of trust.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"3 1","pages":"276 - 286"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74362190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Committing to interdependence: Implications from game theory for human–robot trust","authors":"Yosef Razin, K. Feigh","doi":"10.1515/pjbr-2021-0031","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0031","url":null,"abstract":"Abstract Human–robot interaction (HRI) and game theory have developed distinct theories of trust for over three decades in relative isolation from one another. HRI has focused on the underlying dimensions, layers, correlates, and antecedents of trust models, while game theory has concentrated on the psychology and strategies behind singular trust decisions. Both fields have grappled to understand over-trust and trust calibration, as well as how to measure trust expectations, risk, and vulnerability. This article presents initial steps in closing the gap between these fields. By using insights and experimental findings from interdependence theory and social psychology, this work starts by analyzing a large game theory competition data set to demonstrate that the strongest predictors for a wide variety of human–human trust interactions are the interdependence-derived variables for commitment and trust that we have developed. It then presents a second study with human subject results for more realistic trust scenarios, involving both human–human and human–machine trust. In both the competition data and our experimental data, we demonstrate that the interdependence metrics better capture social “overtrust” than either rational or normative psychological reasoning, as proposed by game theory. This work further explores how interdependence theory – with its focus on commitment, coercion, and cooperation – addresses many of the proposed underlying constructs and antecedents within human–robot trust, shedding new light on key similarities and differences that arise when robots replace humans in trust interactions.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"57 1","pages":"481 - 502"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80537026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impacts of using a social robot to teach music to children with low-functioning autism","authors":"A. Taheri, A. Shariati, Rozita Heidari, M. Shahab, M. Alemi, A. Meghdari","doi":"10.1515/pjbr-2021-0018","DOIUrl":"https://doi.org/10.1515/pjbr-2021-0018","url":null,"abstract":"Abstract This article endeavors to present the impact of conducting robot-assisted music-based intervention sessions for children with low-functioning (LF) autism. To this end, a drum/xylophone playing robot is used to teach basic concepts of how to play the instruments to four participants with LF autism during nine educational sessions. The main findings of this study are compared to similar studies conducted with children with high-functioning autism. Our main findings indicated that the stereotyped behaviors of all the subjects decreased during the course of the program with an approximate large Cohen’s d effect size. Moreover, the children showed some improvement in imitation, joint attention, and social skills from the Pre-Test to Post-Test. In addition, regarding music education, we indicated that while the children could not pass a test on the music notes or reading music phrases items because of their cognitive deficits, they showed acceptable improvements (with a large Cohen’s d effect size) in the Stambak Rhythm Reproduction Test, which means that some rhythm learning occurred for the LF participants. In addition, we indicated that parenting stress levels decreased during the program. This study presents some potential possibilities of performing robot-assisted interventions for children with LF autism.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"136 1","pages":"256 - 275"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74262355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
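The Taheri et al. record above reports its results as Cohen's d effect sizes. As a reminder of what that statistic measures, here is a small sketch using made-up pre-/post-intervention counts (the numbers are illustrative only, not the study's data):

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: difference of means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Illustrative stereotyped-behaviour counts before and after an intervention
# (hypothetical values, not data from the study above).
pre = [12, 15, 11, 14]
post = [7, 9, 6, 8]
d = cohens_d(pre, post)
# By convention, |d| >= 0.8 counts as a "large" effect.
print(round(d, 2))
```

With only four scores per group this is purely a demonstration of the formula; the conventional small/medium/large thresholds are 0.2, 0.5, and 0.8.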