Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956382
Toward a Wearable Affective Robot That Detects Human Emotions from Brain Signals by Using Deep Multi-Spectrogram Convolutional Neural Networks (Deep MS-CNN)
Ker-Jiun Wang, C. Zheng
A wearable robot that constantly monitors, adapts, and reacts to human needs holds promise as a technology to alleviate stress and contribute to mental health. Current means of supporting mental health include counseling, medication, and relaxation techniques such as meditation or breathing exercises. The theory that human touch causes the body to release the hormone oxytocin, effectively alleviating anxiety, points to a potential complement to these existing methods. Wearable robots that generate affective touch have the potential to improve social bonds and regulate emotion and cognitive function. In this study, we used a wearable robotic tactile stimulation device, AffectNodes2, to mimic human affective touch. The touch-stimulated brain waves were captured from 4 EEG electrodes placed over the parietal, prefrontal, and left and right temporal regions of the brain. A novel Deep MS-CNN with an emotion pooling structure was developed to classify Affective touch, Non-affective touch, and Relaxation stimuli with over 95% accuracy, allowing the robot to assess the user's current affective status. This sensing and decoding structure is our first step toward a self-adaptive robot that adjusts its touch stimulation patterns to help regulate affective status.
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956311
A Brief Review of the Electronics, Control System Architecture, and Human Interface for Commercial Lower Limb Medical Exoskeletons Stabilized by Aid of Crutches
Nahla Tabti, Mohamad Kardofaki, S. Alfayad, Y. Chitour, F. Ouezdou, Eric Dychus
Research in the field of powered orthoses and exoskeletons has expanded tremendously over the past years. Lower limb exoskeletons are widely used in robotic rehabilitation and are showing benefits for patients' quality of life. Many engineering reviews have been published about these devices, addressing general aspects. To the best of our knowledge, however, no review has specifically examined in detail the control of the most commonly used devices, particularly the algorithms used to define the functional state of the exoskeleton, such as walking, sit-to-stand, etc. In this contribution, the control hardware and software, as well as the integrated sensors used for feedback, are thoroughly analyzed. We also discuss the importance of user-specific state definition and customized control architecture. Although many prototypes have been developed, we chose to target medical lower limb exoskeletons that use crutches to maintain balance and that are minimally actuated, as these are the most common systems now being commercialized and used worldwide. The outcome of this review therefore offers practical insight into the mechatronic design, system architecture, and control technology of such devices.
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956411
Fatigue Estimation using Facial Expression Features and Remote-PPG Signal
Masaki Hasegawa, Kotaro Hayashi, J. Miura
Research and development of lifestyle support robots for daily life is being actively conducted, and healthcare is one such function for these robots. In this research, we develop a fatigue estimation system using a camera that can easily be mounted on a robot. Measurements taken in a real environment must account for noise caused by changes in lighting and by the subject's movement, so the fatigue estimation system is based on a robust feature extraction method. As an indicator of fatigue, the LF/HF ratio is calculated from the power spectrum of the RR intervals in the electrocardiogram or the blood volume pulse (BVP). The BVP can be detected at the fingertip using photoplethysmography (PPG); in this study, we used a contactless variant, remote PPG (rPPG), detected from luminance changes in the facial image. Some studies show that facial expression features extracted from facial video are also useful for fatigue estimation, but the dimensionality reduction used in previous methods (e.g., LLE) discards information carried in the high-dimensional features. We therefore also developed a camera-based fatigue estimation method using such features for healthcare robots. It uses facial landmark points, the line-of-sight vector, and the size of ellipses fitted to the eye and mouth landmarks; that is, the proposed method simply uses time-varying facial shape information such as eye size and gaze direction. We verified the performance of the proposed features through fatigue state classification with a Support Vector Machine (SVM).
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956400
On the Role of Trust in Child-Robot Interaction*
Paulina Zguda, B. Sniezynski, B. Indurkhya, Anna Kolota, Mateusz Jarosz, Filip Sondej, Takamune Izui, Maria Dziok, A. Belowska, Wojciech Jędras, G. Venture
In child-robot interaction, the element of trust towards the robot is critical. This is particularly important the first time the child meets the robot, as the trust gained during this interaction can play a decisive role in future interactions. We present an in-the-wild study where Polish kindergartners interacted with a Pepper robot. The videos of this study were analyzed for issues of trust, anthropomorphization, and reaction to malfunction, under the assumption that the last two factors influence the children's trust towards Pepper. Our results reveal children's interest in the robot performing tasks specific to humans, highlight the importance of the conversation scenario and the need for an extended library of answers from the robot about its abilities and origin, and show how children tend to provoke the robot.
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956256
Social and Entertainment Gratifications of Videogame Play Comparing Robot, AI, and Human Partners
N. Bowman, J. Banks
As social robots' and AI agents' roles become more diverse, these machines increasingly function as sociable partners. This trend raises questions about whether the social gaming gratifications known to emerge in human-human co-play may (not) also manifest in human-machine co-play. In the present study, we examined social outcomes of playing a videogame with a human partner as compared to an ostensible social robot or AI (i.e., computer-controlled player) partner. Participants (N = 103) were randomly assigned to three experimental conditions in which they played a cooperative video game with either a human, an embodied robot, or a non-embodied AI. Results indicated that few statistically significant or meaningful differences existed between the partner types in perceived closeness with the partner, relatedness need satisfaction, or entertainment outcomes. However, qualitative data suggested that human and robot partners were both seen as more sociable, while AI partners were seen as more functional.
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956386
Establishing Human-Robot Trust through Music-Driven Robotic Emotion Prosody and Gesture
Richard J. Savery, R. Rose, Gil Weinberg
As human-robot collaboration opportunities continue to expand, trust becomes ever more important for full engagement and utilization of robots. Affective trust, built on emotional relationships and interpersonal bonds, is particularly critical, as it is more resilient to mistakes and increases the willingness to collaborate. In this paper we present a novel model built on music-driven emotional prosody and gestures that encourages the perception of a robotic identity designed to avoid the uncanny valley. Symbolic musical phrases were generated and tagged with emotional information by human musicians. These phrases controlled a synthesis engine playing back pre-rendered audio samples generated through interpolation of phonemes and electronic instruments. Gestures were also driven by the symbolic phrases, mapping the emotion of each musical phrase to low degree-of-freedom movements. A user study showed that our system was able to accurately portray a range of emotions to the user, and a significant result showed that our non-linguistic audio generation achieved an 8% higher mean average trust rating than a state-of-the-art text-to-speech system.
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956305
Ontologenius: A long-term semantic memory for robotic agents
Guillaume Sarthou, A. Clodic, R. Alami
In this paper we present Ontologenius, a semantic knowledge storage and reasoning framework for autonomous robots. More than classic ontology software for querying a knowledge base with a first-order internal logic, as is done for the semantic web, Ontologenius offers features adapted to robotic use, including human-robot interaction. We introduce the ability to modify the knowledge base during execution, whether through dialogue or geometric reasoning, and to keep these changes even after the robot is powered off. Since Ontologenius was developed for a robot that interacts with humans, we have endowed the system with the ability to generalize attributes and properties, as well as the possibility to model and estimate the semantic memory of a human partner and to implement theory-of-mind processes. This paper presents the architecture and main features of Ontologenius, as well as examples of its use in robotics applications.
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956385
Surprise! Predicting Infant Visual Attention in a Socially Assistive Robot Contingent Learning Paradigm
Lauren Klein, L. Itti, Beth A. Smith, Marcelo R. Rosales, S. Nikolaidis, M. Matarić
Early intervention to address developmental disability in infants has the potential to promote improved outcomes in neurodevelopmental structure and function [1]. Researchers are starting to explore Socially Assistive Robotics (SAR) as a tool for delivering early interventions that are synergistic with, and enhance, human-administered therapy. For SAR to be effective, the robot must be able to consistently attract the infant's attention in order to engage the infant in a desired activity. This work presents an analysis of eye gaze tracking data from five 6- to 8-month-old infants interacting with a Nao robot that kicked its leg as a contingent reward for infant leg movement. We evaluate a Bayesian model of low-level surprise, applied to video from the infants' head-mounted camera and to the timing of robot behaviors, as a predictor of infant visual attention. The results demonstrate that over 67% of infant gaze locations fell in areas the model evaluated as more surprising than average. We also present an initial exploration using surprise to predict the extent to which the robot attracts infant visual attention during specific intervals of the study. This work is the first to validate the surprise model on infants; our results indicate the potential for using surprise to inform robot behaviors that attract infant attention during SAR interactions.
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956301
Verbal Explanations for Deep Reinforcement Learning Neural Networks with Attention on Extracted Features
Xinzhi Wang, Shengcheng Yuan, Hui Zhang, M. Lewis, K. Sycara
In recent years, there has been increasing interest in transparency in deep neural networks. Most work on transparency has addressed image classification. In this paper, we report on work on transparency in Deep Reinforcement Learning Networks (DRLNs), which have been extremely successful in learning action control in Atari games. We focus on generating verbal (natural language) descriptions and explanations of deep reinforcement learning policies. Successful generation of verbal explanations would allow people (e.g., users, debuggers) to better understand the inner workings of DRLNs, which could ultimately increase trust in these systems. We present a generation model consisting of three parts: an encoder for feature extraction, an attention structure for selecting features from the encoder's output, and a decoder for generating the explanation in natural language. Four variants of the attention structure (full attention, global attention, adaptive attention, and object attention) are designed and compared. The adaptive attention structure performs best among all the variants, even though the object attention structure is given additional information about object locations. Additionally, our experimental results show that the proposed encoder outperforms two baseline encoders (ResNet and VGG) in its ability to distinguish game-state images.
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956468
Designing a Socially Assistive Robot for Long-Term In-Home Use for Children with Autism Spectrum Disorders
Roxanna Pakkar, Caitlyn E. Clabaugh, Rhianna Lee, Eric Deng, M. Matarić
Socially assistive robotics (SAR) research has shown great potential for supplementing and augmenting therapy for children with autism spectrum disorders (ASD). However, the vast majority of SAR research has been limited to short-term studies in highly controlled environments. The design and development of a SAR system capable of interacting autonomously in situ for long periods of time involves many engineering and computing challenges. This paper presents the design of a fully autonomous SAR system for long-term, in-home use with children with ASD. We address design decisions based on robustness and adaptability needs, discuss the development of the robot's character and interactions, and provide insights from the month-long, in-home data collections with children with ASD. This work contributes to a larger research program that is exploring how SAR can be used for enhancing the social and cognitive development of children with ASD.