Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333621
H. Ahn, Min Ho Lee, B. MacDonald
This paper presents a robot system for healthcare facility environments. Current healthcare robot systems do not address healthcare workflows well; our goal is to provide a distributed, heterogeneous multi-robot system that integrates with healthcare workflows and is easy to modify when workflow requirements change. The proposed system consists of three subsystems: a receptionist robot system, a nurse assistant robot system, and a medical server. The receptionist robot and the nurse assistant robot perform tasks that support the human receptionist and nurse. The healthcare robots upload and download patient information through the medical server and provide data summaries to human caregivers via a web interface. We developed these healthcare robot systems on our new robotic software framework, which is designed to integrate different programming frameworks easily and to minimize the impact of framework differences and new versions. We test the functionality of each healthcare robot system, evaluate robot-robot collaboration, and present a case study.
{"title":"Healthcare robot systems for a hospital environment: CareBot and ReceptionBot","authors":"H. Ahn, Min Ho Lee, B. MacDonald","doi":"10.1109/ROMAN.2015.7333621","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333621","url":null,"abstract":"This paper presents a robot system for healthcare facility environments. Current healthcare robot systems do not address healthcare workflows well and our goal is to provide distributed, heterogeneous multiple robot systems that are capable of integrating with healthcare workflows and are easy to modify when workflow requirements change. The proposed system consists of three subsystems: a receptionist robot system, a nurse assistant robot system, and a medical server. The roles of the receptionist robot and the nurse assistant robot are to do tasks to help the human receptionist and nurse. The healthcare robots upload and download patient information through the medical server and provide data summaries to human care givers via a web interface. We developed these healthcare robot systems based on our new robotic software framework, which is designed to easily integrate different programming frameworks, and minimize the impact of framework differences and new versions. We test the functionalities of each healthcare robot system, evaluate the robot-robot collaboration, and present a case study.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125445653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333592
Young-Hoon Nho, Ju-Hwan Seo, Jeong-Yean Yang, D. Kwon
Driving assistance systems (DASs) can be useful to inexperienced drivers. Current DASs comprise front/rear monitoring systems (FRMSs), lane departure warning systems (LDWSs), side obstacle warning systems (SOWSs), and similar components. DASs sometimes present unnecessary information when they rely on unprocessed low-level data, so they need to be improved to give the driver relevant, high-level information. In this paper, we present an intelligent driving assistance robotic agent for safe driving. We recognize seven driving situations, namely speed bump, corner, crowded area, uphill, downhill, straight, and parking space, using hidden Markov models (HMMs) over velocity, accelerator pedal, and steering wheel signals. The seven situations and global positioning system information are combined to generate a situation information map. Whereas the developers of a navigation system must tag driving events by hand, our driving assistance agent tags situation information automatically as the vehicle is driven. The robotic agent uses the driving situation and status information to support safe driving with motions and facial and verbal expressions.
{"title":"Driving situation-based real-time interaction with intelligent driving assistance agent","authors":"Young-Hoon Nho, Ju-Hwan Seo, Jeong-Yean Yang, D. Kwon","doi":"10.1109/ROMAN.2015.7333592","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333592","url":null,"abstract":"Driving assistance systems (DASs) can be useful to inexperienced drivers. Current DASs are composed of front rear monitoring systems (FRMSs), lane departure warning systems (LDWSs), side obstacle warning systems (SOWSs), etc. Sometimes, DASs provide unnecessary information when using unprocessed low-level data. Therefore, to provide high-level necessary information to the driver, DASs need to be improved. In this paper, we present an intelligent driving assistance robotic agent for safe driving. We recognize seven driving situations, namely, speed bump, corner, crowded area, uphill, downhill, straight, and parking space, using hidden Markov models (HMMs) based on velocity, accelerator pedal, and steering wheel. The seven situations and global positioning system information are used to generate a situation information map. The developers of a navigation system have to tag driving events by themselves. In contrast, our driving assistance agent tags situation information automatically as the vehicle is driven. The robotic agent uses the driving situation and status information to assist safe driving with motions and facial and verbal expressions.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125984208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333669
Eleuda Nuñez, Soichiro Matsuda, Masakazu Hirokawa, Jun-ichi Yamamoto, Kenji Suzuki
Among treatments for children with ASD, assistive robots are growing in popularity because they can elicit a range of social behaviors. At the same time, technology provides methods to collect quantitative data automatically, which can help the therapist evaluate a child's progress. In this study we introduce a system composed of multiple spherical devices, together with the design of a turn-taking activity using those devices. Previous studies have found turn taking to be an important social skill for development and for engaging in activities with others. To evaluate the system's performance, a single-case experiment with a boy with ASD and the developed device was conducted. During the activity, the device had two roles: to engage the child in the turn-taking activity and to provide the therapist with information describing the child's behavior. We successfully collected quantitative data representing turn taking and observed how the boy manipulated and interacted with the device. Based on these results, we are motivated to continue exploring device-mediated activities for children with ASD.
Title: Paired robotic devices to mediate and represent social behaviors
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333639
Minhua Zheng, Jiaole Wang, M. Meng
In this paper, two gesture design methods for the humanoid robot NAO are proposed and compared. The first method maps human motions to the robot using an RGB-D sensor and kinematic modeling. The second method is based on hand-puppeteering. Thirteen subjects were recruited to design a forearm waving gesture for a NAO robot with each of the two methods. The two resulting groups of forearm waving gestures were then compared by another sixteen subjects. Our experimental results indicate that the forearm waving gestures obtained from the hand-puppeteering method are slower and have a smaller range of motion than those obtained from the motion mapping method. In addition, people tend to perceive the gestures obtained from the hand-puppeteering method as more likeable and as conveying the greeting message better than those obtained from the motion mapping method. This work contributes to a better understanding of the nature of the two gesture design methods and offers a practical reference for robot behavior designers when choosing between them.
{"title":"Comparing two gesture design methods for a humanoid robot: Human motion mapping by an RGB-D sensor and hand-puppeteering","authors":"Minhua Zheng, Jiaole Wang, M. Meng","doi":"10.1109/ROMAN.2015.7333639","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333639","url":null,"abstract":"In this paper, two gesture design methods for the humanoid robot NAO are proposed and compared. The first method is mapping human motions to the robot by an RGB-D sensor and kinematic modeling. The second method is based on hand-puppeteering. Thirteen subjects are recruited to design a forearm waving gesture for a NAO robot by the two methods. The obtained two groups of forearm waving gestures are then compared by another sixteen subjects. Our experimental results indicate that the forearm waving gestures obtained from the hand-puppeteering method are slower and have smaller range of motion than those obtained from the motion mapping method. Besides, people tend to perceive the forearm waving gestures obtained from the hand-puppeteering method as more likeable and as conveying the greeting message better than those obtained from the motion mapping method. This work contributes to a better understanding of the nature of the two gesture design methods and offers instructive reference for robot behavior designers on design method choosing.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123551777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333578
S. Shamsuddin, H. Yussof, F. A. Hanapiah, Salina Mohamed
This research presents a validation study of a qualitative tool for analyzing children's responses in robot-based intervention. The 24 behavioral items in the tool were determined from routine observations carried out by clinicians and from the definitions of autism adopted in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR). Thirty-four experts assessed content validity and tool reliability by rating the items on a Likert scale. The tool was found to have good content validity, with more than 67% of experts scoring at least 3 on the 5-point Likert scale. A Cronbach's alpha coefficient of 0.872 reflected the tool's reliability and internal consistency. The tool was used to analyze the behavioral responses of children with autism when exposed to a humanoid robot. It functioned as a score sheet to compare the behavior of autistic children with and without a robot present. These findings put forward a tool whose contents are considered valid for evaluating the behavioral outcomes of studies involving children with autism and robots.
{"title":"A content validated tool to observe autism behavior in child-robot interaction","authors":"S. Shamsuddin, H. Yussof, F. A. Hanapiah, Salina Mohamed","doi":"10.1109/ROMAN.2015.7333578","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333578","url":null,"abstract":"This research presents the validation study of a qualitative tool to analyze the response in robot-based intervention. The 24 behavioral items in the tool were determined through routine observations carried out by clinicians and the definitions of autism adopted by the Diagnostic and Statistical Manual of Mental Disorders: Fourth Edition-Text Revision (DSM-IV-TR). 34 experts determined content validity and tool reliability by viewpoints through the Likert scale. The tool was found to have good content validity with more than 67% of experts scored at least 3 on the 5-point Likert scale. Cronbach's alpha coefficient of 0.872 reflected the tool's content reliability and internal consistency. The tool was used to analyze the behavior response of children with autism when exposed to a humanoid robot. It functioned as a score-sheet to compare the behavior of autistic children with and without the presence of a robot. These findings put forward a tool with contents considered valid to evaluate behavior outcome of studies involving children with autism and robots.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116495620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333593
T. Omori, T. Shimotomai, Kasumi Abe, T. Nagai
Although communication is one of our most basic activities, we cannot always interact effectively. It is well known that a key to successful interaction is engaging the other person while keeping them in a good mood; in other words, capturing the other's interest is a precondition for successful communication.
{"title":"Model of strategic behavior for interaction that guide others internal state","authors":"T. Omori, T. Shimotomai, Kasumi Abe, T. Nagai","doi":"10.1109/ROMAN.2015.7333593","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333593","url":null,"abstract":"Though communication is one of our basic activity, it is not always that we can interact effectively. It is well known that a key point for a successful interaction is the inclusion of other with a good mood. It means acquisition of other's interest is a precondition for a successful communication.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132958578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333623
Tomoko Yonezawa, Naoto Yoshida, Jumpei Nishinaka
This paper proposes effective communication of both the apparent and the actual level of an agent's acceptance of a user's request, using verbal and nonverbal expressions. The crossmodal combination is expected to enable delicate expression of the agent's internal state, especially when the agent's stated acceptance differs from its actual feeling, so that the user becomes aware of difficult situations. We adopt facial expression and the flexion of the word ending as parameters of the agent's internal state. These expressions are attached to the scripts for each acceptance level. The results showed the following: 1) the effectiveness of both the facial and flexion expressions, and 2) the crossmodal combinations that convey the agent's concealed but real feeling.
{"title":"Crossmodal combination among verbal, facial, and flexion expression for anthropomorphic acceptability","authors":"Tomoko Yonezawa, Naoto Yoshida, Jumpei Nishinaka","doi":"10.1109/ROMAN.2015.7333623","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333623","url":null,"abstract":"This paper proposes an effective communication with an agent's appearance and reality of the acceptance level of the user's order using verbal and nonverbal expressions. The crossmodal combination is expected to enable delicate expressions of the agent's internal state especially when the agent's decision of the acceptance is different from the agent's mind, so the user becomes aware of the difficult situations. We have proposed to adopt the facial expression and the flexion of the word ending as the parameters of the agent's internal state. The expressions are attached to the scripts of each acceptance level. The results of the expressions showed the following: 1) the effectiveness of both the facial and flexion expressions, and 2) the crossmodal combinations that generate the agent's concealed but real feeling.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129814253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333650
Konstantinos Theofilis, Chrystopher L. Nehaniv, K. Dautenhahn
The question of what to imitate is pivotal for imitation learning in robotics. When the robot's tutor is a naive user, it is very difficult for the embodied agent to account for the unpredictability of the tutor's behaviour. Preliminary results from a previous study suggested that the phenomenon of temporal emphasis, i.e., that tutors tend to keep the goal state of the demonstrated task stationary for longer than the sub-states, can be used to recognise the task. In the present paper, the previous study is expanded and the existence of the phenomenon is investigated further. An improved experimental setup, using the iCub humanoid robot and naive users, was implemented. Analysis of the data showed that the phenomenon was detected in the majority of cases, with a strongly significant result. In the few cases where the end state was not the one with the longest time span, it was a borderline second. A very simple algorithm using a single binary criterion was then used to show that the phenomenon exists and can be detected easily. This suggests that humans may also be able to detect the phenomenon and use it to recognise the end goal as learners, or to emphasise it when teaching as tutors, at least for tasks with clear and separate sub-goal sequences. A robot implementing this behaviour could perform better both as a tutor and as a learner when interacting with naive users.
{"title":"Goal recognition using temporal emphasis","authors":"Konstantinos Theofilis, Chrystopher L. Nehaniv, K. Dautenhahn","doi":"10.1109/ROMAN.2015.7333650","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333650","url":null,"abstract":"The question of what to imitate is pivotal for imitation learning in robotics. When the robot's tutor is a naive user, it is very difficult for the embodied agent to account for the unpredictability of the tutor's behaviour. Preliminary results from a previous study suggested that the phenomenon of temporal emphasis, i.e., that tutors tend to keep the goal state of the demonstrated task stationary longer than the sub-states, can be used to recognise that task. In the present paper, the previous study is expanded and the existence of the phenomenon is investigated further. An improved experimental setup, using the iCub humanoid robot and naive users, was implemented. Analysis of the data showed that the phenomenon was detected in the majority of the cases, with a strongly significant result. In the few cases that the end state was not the one with the longest time span, it was a borderline second. Then, a very simple algorithm using a single binary criterion was used to show that the phenomenon exists and can be detected easily. That leads to the argument that humans may also be able to detect this phenomenon and use it for recognizing, as learners or emphasizing and teaching as tutors, the end goal, at least for tasks with clear and separate sub-goal sequences. A robot that implements this behavior could be able to perform better both as a tutor and as a learner when interacting with naive users.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"296 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134476077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333699
Caroline Rizzi Raymundo, Colin G. Johnson, P. A. Vargas
This work proposes a theoretical architectural model based on the brain's fear learning system, with the purpose of generating artificial fear conditioning at both the stimulus and the context abstraction level in robot companions. The proposed architecture is inspired by the different brain regions involved in fear learning, here divided into four modules that work in an integrated and parallel manner: the sensory system, the amygdala system, the hippocampal system and the working memory. Each module is based on a different approach and performs a different task in the process of learning and memorizing environmental cues to predict the occurrence of unpleasant situations. The main contribution of the proposed model is the integration of fear learning and context awareness in order to fuse emotional and contextual artificial memories. The purpose is to provide robots with more believable social responses, leading to more natural interactions between humans and robots.
{"title":"An architecture for emotional and context-aware associative learning for robot companions","authors":"Caroline Rizzi Raymundo, Colin G. Johnson, P. A. Vargas","doi":"10.1109/ROMAN.2015.7333699","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333699","url":null,"abstract":"This work proposes a theoretical architectural model based on the brain's fear learning system with the purpose of generating artificial fear conditioning at both stimuli and context abstraction levels in robot companions. The proposed architecture is inspired by the different brain regions involved in fear learning, here divided into four modules that work in an integrated and parallel manner: the sensory system, the amygdala system, the hippocampal system and the working memory. Each of these modules is based on a different approach and performs a different task in the process of learning and memorizing environmental cues to predict the occurrence of unpleasant situations. The main contribution of the model proposed here is the integration of fear learning and context awareness in order to fuse emotional and contextual artificial memories. The purpose is to provide robots with more believable social responses, leading to more natural interactions between humans and robots.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"333 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122139287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333591
Takahiro Matsumoto, Shunichi Seko, Ryosuke Aoki, Akihiro Miyata, Tomoki Watanabe, Tomohiro Yamada
Many studies in human-robot interaction have addressed the affective robot, a robot that can express emotion. Truly useful applications, however, can only be designed once the effect of such expressions on the user is fully understood. In this paper, we propose a new application scenario in which an affective robot shares the user's experience, and we describe an experiment on how the user's experience is altered by the robot's presence. As stimuli, we use movie scenes that evoke four types of emotion: excitement, fright, depression, and relaxation. Twenty-four participants watched different movies under three conditions: no robot present, a robot offering appropriate emotional expression, and a robot offering random emotional expression. The results show that participants watching with the appropriate-emotion robot experienced stronger emotion with exciting and relaxing movies and weaker emotion with scary movies than they did without the robot. These changes in the viewer's experience did not occur when watching with the random-emotion robot. From the results, we extract design points for affective robot behavior that enhances user experience. This research is novel in examining the impact of robot emotion that the viewer sees as appropriate on the viewer's experience.
Title: Towards enhancing human experience by affective robots: Experiment and discussion