The usefulness of P-CUBE as a programming education tool for programming beginners
Pub Date: 2015-11-23 · DOI: 10.1109/ROMAN.2015.7333642
T. Motoyoshi, Shun Kakehashi, H. Masuta, K. Koyanagi, T. Oshima, H. Kawakami
P-CUBE is a block-type programming tool for beginners, including the visually impaired. This paper describes a study on the usefulness of P-CUBE as an education tool for programming beginners. We conducted an experiment comparing P-CUBE with conventional text-based programming software for a mobile robot. During the experiment, we recorded how often participants consulted the programming manual and how long they took to complete the programming exercises. At the end of the experiment, we collected subjective assessments from the participants. The results showed that P-CUBE is useful as a programming education tool for beginners.
{"title":"The usefulness of P-CUBE as a programming education tool for programming beginners","authors":"T. Motoyoshi, Shun Kakehashi, H. Masuta, K. Koyanagi, T. Oshima, H. Kawakami","doi":"10.1109/ROMAN.2015.7333642","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333642","url":null,"abstract":"P-CUBE is a block type programming tool for begginers including visually impaired. This paper describes a study conducted on the usefulness of P-CUBE as an education tool for a programming begginer. We conducted an experiment for comparing P-CUBE and a conventional code type programming soft for a mobile robot. We record the times of referencing the manual of programming and the time required for programming exercise during the experiment. We get subjective assessment from subjects at the end of the experiment. The result showed that P-CUBE is useful as a programming education tool for beginners.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121524916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How important is body language in mood induction procedures with a humanoid robot?
Pub Date: 2015-11-23 · DOI: 10.1109/ROMAN.2015.7333697
Cristina Diaz-Montilla, A. P. D. Pobil
The aim of this article is to investigate the effectiveness of a humanoid robot's body language in inducing emotions. The work is based on the principles of positive psychology (PP), social learning, therapeutic robotics, and mood induction procedures (MIPs). Building on the Velten method for inducing a positive mood, the body language of a humanoid robot is used here as a manipulated variable to test its efficacy in inducing emotions. We have three hypotheses: (H1) positive body language reinforces the positive attitude of the Velten positive statements; (H2) body language expressing the opposite attitude to the one expressed by the Velten statements, i.e., a negative attitude, can negatively alter the mood induction results; (H3) the more positive the body language, the stronger the positive induction effect. We ran experiments with 48 volunteers to test these hypotheses. The results support hypotheses (H1) and (H2), but not (H3), which is not confirmed when an exaggerated expression of elated mood is used. Furthermore, our new combined MIP has a significant effect size for inducing positive emotions.
{"title":"How important is body language in mood induction procedures with a humanoid robot?","authors":"Cristina Diaz-Montilla, A. P. D. Pobil","doi":"10.1109/ROMAN.2015.7333697","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333697","url":null,"abstract":"The aim of this article is to investigate the effectiveness of the body language of a humanoid robot to induce emotions. It is based on the principles of positive psychology (PP), social learning, therapeutic robotics and mood induction procedures (MIPs). According to the Velten Method to induce a positive mood, the body language of a humanoid robot is used here as a modulated variable to test its efficacy inducing emotions. We have three hypotheses: (H1) Positive body language reinforces the positive attitude of the Velten positive statements; (H2) Body language which expresses the opposite attitude that the one expressed by Velten statements, that is a negative attitude, can vary negatively the mood induction results; (H3) The more positive the body language is, the higher the positive induction effect is; We have run experiments with 48 volunteers to test these hypotheses. Results show that the hypotheses (H1) and (H2) are correct, but it is not the case for (H3), which is not confirmed with an exaggerated expression of elated mood. Furthermore, our new combined MIP has a significant effect size to induce positive emotions.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122473711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Usability evaluation with different viewpoints of a Human-Swarm interface for UAVs control in formation
Pub Date: 2015-11-23 · DOI: 10.1109/ROMAN.2015.7333638
C. Recchiuto, A. Sgorbissa, R. Zaccaria
A common way to organize a large number of robots, whether they move autonomously or are controlled by a human operator, is to let them move in formation. This principle takes inspiration from nature, where formations maximize the ability to monitor the environment and therefore to anticipate risks and find targets. In robotics, beyond these reasons, organizing a robot team in a formation allows a human operator to deal with a large number of agents in a simpler way, moving the swarm as a single entity. In this context, the type of visual feedback is fundamental for correct situational awareness, but in practice an optimal camera configuration is not always available. Human operators usually rely on cameras on board the multirotors, with an egocentric point of view, whereas it is known that in mobile robotics overall awareness and pattern recognition are optimized by exocentric views. In this article we present an analysis of the performance achieved by human operators controlling a swarm of UAVs in formation, accomplishing different tasks and using different points of view. The control architecture is implemented in a ROS framework and interfaced with a 3D simulation environment. Experimental tests show a degradation of performance when using egocentric cameras compared with an exocentric point of view, although cameras on board the robots allow simple tasks to be accomplished satisfactorily.
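To make the "single entity" idea concrete, here is a minimal Python sketch (not taken from the paper) of how a fixed-offset formation can be repositioned from a single leader pose; the wedge offsets and function names are invented for illustration.

```python
import math

# Fixed body-frame offsets for a three-UAV wedge (invented values, metres).
OFFSETS = [(0.0, 0.0), (-2.0, 2.0), (-2.0, -2.0)]

def formation_goals(leader_x, leader_y, leader_yaw):
    """Rotate each offset into the world frame and translate by the leader
    pose, so a single operator command repositions the whole swarm."""
    c, s = math.cos(leader_yaw), math.sin(leader_yaw)
    return [(leader_x + c * dx - s * dy, leader_y + s * dx + c * dy)
            for dx, dy in OFFSETS]

# One command for the leader yields a goal position for every UAV.
print(formation_goals(10.0, 5.0, math.pi / 2))
```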
{"title":"Usability evaluation with different viewpoints of a Human-Swarm interface for UAVs control in formation","authors":"C. Recchiuto, A. Sgorbissa, R. Zaccaria","doi":"10.1109/ROMAN.2015.7333638","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333638","url":null,"abstract":"A common way to organize a high number of robots, both when moving autonomously and when controlled by a human operator, is to let them move in formation. This is a principle that takes inspiration from the nature, that maximizes the possibility of monitoring the environment and therefore of anticipating risks and finding targets. In robotics, alongside these reasons, the organization of a robot team in a formation allows a human operator to deal with a high number of agents in a simpler way, moving the swarm as a single entity. In this context, the typology of visual feedback is fundamental for a correct situational awareness, but in common practice having an optimal camera configuration is not always possible. Usually human operators use cameras on board the multirotors, with an egocentric point of view, while it is known that in mobile robotics overall awareness and pattern recognition are optimized by exocentric views. In this article we present an analysis of the performance achieved by human operators controlling a swarm of UAVs in formation, accomplishing different tasks and using different point of views. The control architecture is implemented in a ROS framework and interfaced with a 3D simulation environment. Experimental tests show a degradation of performance while using egocentric cameras with respect of an exocentric point of view, although cameras on board the robots allow to satisfactorily accomplish simple tasks.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132420860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When will people regard robots as morally competent social partners?
Pub Date: 2015-11-23 · DOI: 10.1109/ROMAN.2015.7333667
B. Malle, Matthias Scheutz
We propose that moral competence consists of five distinct but related elements: (1) having a system of norms; (2) mastering a moral vocabulary; (3) exhibiting moral cognition and affect; (4) exhibiting moral decision making and action; and (5) engaging in moral communication. We identify some of the likely triggers that may convince people to (justifiably) ascribe each of these elements of moral competence to robots. We suggest that humans will treat robots as moral agents (who have some rights and obligations, and are targets of blame) if they perceive them to have at least elements (1) and (2) and one or more of elements (3)-(5).
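The ascription condition in the last sentence is a simple logical predicate; the Python sketch below encodes it directly. The dataclass fields and function names are illustrative, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class PerceivedCompetence:
    has_norm_system: bool          # (1) a system of norms
    has_moral_vocabulary: bool     # (2) a moral vocabulary
    moral_cognition_affect: bool   # (3) moral cognition and affect
    moral_decision_action: bool    # (4) moral decision making and action
    moral_communication: bool      # (5) moral communication

def treated_as_moral_agent(p: PerceivedCompetence) -> bool:
    """The paper's proposed condition: elements (1) and (2) are perceived,
    plus at least one of (3)-(5)."""
    core = p.has_norm_system and p.has_moral_vocabulary
    extra = (p.moral_cognition_affect or p.moral_decision_action
             or p.moral_communication)
    return core and extra

# A robot perceived to have norms, vocabulary, and moral decision making:
print(treated_as_moral_agent(PerceivedCompetence(True, True, False, True, False)))  # True
```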
{"title":"When will people regard robots as morally competent social partners?","authors":"B. Malle, Matthias Scheutz","doi":"10.1109/ROMAN.2015.7333667","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333667","url":null,"abstract":"We propose that moral competence consists of five distinct but related elements: (1) having a system of norms; (2) mastering a moral vocabulary; (3) exhibiting moral cognition and affect; (4) exhibiting moral decision making and action; and (5) engaging in moral communication. We identify some of the likely triggers that may convince people to (justifiably) ascribe each of these elements of moral competence to robots. We suggest that humans will treat robots as moral agents (who have some rights, obligations, and are targets of blame) if they perceive them to have at least elements (1) and (2) and one or more of elements (3)-(5).","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131477310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and implementation of multi-dimensional flexible antenna-like hair motivated by ‘Aho-Hair’ in Japanese anime cartoons: Internal state expressions beyond design limitations
Pub Date: 2015-11-23 · DOI: 10.1109/ROMAN.2015.7333682
Kazuhiro Sasabuchi, Youhei Kakiuchi, K. Okada, M. Inaba
Recent research in psychology argues for the importance of “context” in emotion perception. According to these studies, facial expressions do not possess discrete emotional meanings; rather, the meaning depends on the social situation of how and when the expressions are used. These results imply that emotion expressivity depends on an appropriate combination of context and expression, not on the distinctiveness of the expressions themselves. It is therefore inferable that relying on facial expressions may not be essential; instead, when appropriate pairs of context and expression are applied, emotional internal states may emerge. This paper first discusses how facial expressions limit robot head design and can be costly in hardware. The paper then proposes a way of expressing context-based emotions as an alternative to facial expressions, and introduces the mechanical structure for applying a specific non-facial contextual expression. The expression originates from Japanese animation, and the mechanism was applied to a real desktop-size humanoid robot. Finally, an experiment on whether the contextual expression can link humanoid motions to emotional internal states was conducted under a sound-context condition. Although the results are limited in cultural scope, this paper presents possibilities for future robotic interfaces for emotion-expressive and interactive humanoid robots.
{"title":"Design and implementation of multi-dimensional flexible antena-like hair motivated by ‘Aho-Hair’ in Japanese anime cartoons: Internal state expressions beyond design limitations","authors":"Kazuhiro Sasabuchi, Youhei Kakiuchi, K. Okada, M. Inaba","doi":"10.1109/ROMAN.2015.7333682","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333682","url":null,"abstract":"Recent research in psychology argue the importance of “context” in emotion perception. According to these recent studies, facial expressions do not possess discrete emotional meanings; rather the meaning depends on the social situation of how and when the expressions are used. These research results imply that the emotion expressivity depends on the appropriate combination of context and expression, and not the distinctiveness of the expressions themselves. Therefore, it is inferable that relying on facial expressions may not be essential. Instead, when appropriate pairs of context and expression are applied, emotional internal states perhaps emerge. This paper first discusses how facial expressions of robots limit their head design, and can be hardware costly. Then, the paper proposes a way of expressing context-based emotions as an alternative to facial expressions. The paper introduces the mechanical structure for applying a specific non-facial contextual expression. The expression was originated from Japanese animation, and the mechanism was applied to a real desktop size humanoid robot. Finally, an experiment on whether the contextual expression is capable of linking humanoid motions and its emotional internal states was conducted under a sound-context condition. Although the results are limited in cultural aspects, this paper presents the possibilities of future robotic interface for emotion-expressive and interactive humanoid robots.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"193 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131686026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predictors of psychological anthropomorphization, mind perception, and the fulfillment of social needs: A case study with a zoomorphic robot
Pub Date: 2015-11-23 · DOI: 10.1109/ROMAN.2015.7333647
F. Eyssel, Michaela Pfundmair
We conducted a human-robot interaction (HRI) experiment in which we tested the effects of inclusionary status (social inclusion vs. social exclusion) and a dispositional correlate of anthropomorphism on social needs fulfillment and the evaluation of a social robot, respectively. The experiment began with an interaction phase of free play between the user and the zoomorphic robot Pleo. This was followed by the experimental manipulation, in which participants were exposed to an experience of social inclusion or social exclusion during a computer game. Subsequently, participants evaluated the robot with regard to psychological anthropomorphism and mind perception, and reported the experienced fulfillment of social needs as well as their individual disposition to anthropomorphize. The present research aimed to demonstrate that situationally induced inclusionary status should predominantly influence experienced social needs fulfillment, but not anthropomorphic inferences about a robot. Analogously, we presumed that evaluations of the robot should mainly be driven by the individual disposition to anthropomorphize nonhuman entities, whereas inclusionary status should not affect these judgments. As predicted, inclusionary status affected only experienced social needs fulfillment, whereas the experimental manipulation did not affect robot-related evaluations. In a similar vein, participants low (vs. high) in anthropomorphism differed in their assessments of the humanity and mind perception of the robot prototype, whereas inclusionary status did not affect these anthropomorphic inferences. Results are discussed in light of the existing literature on social exclusion, social needs fulfillment, and the anthropomorphization of robots.
{"title":"Predictors of psychological anthropomorphization, mind perception, and the fulfillment of social needs: A case study with a zoomorphic robot","authors":"F. Eyssel, Michaela Pfundmair","doi":"10.1109/ROMAN.2015.7333647","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333647","url":null,"abstract":"We conducted a human-robot interaction (HRI) experiment in which we tested the effect of inclusionary status (social inclusion vs. social exclusion) and a dispositional correlate of anthropomorphism on social needs fulfillment and the evaluation of a social robot, respectively. The experiment was initiated by an interaction phase including free play between the user and the zoomorphic robot Pleo. This was followed by the experimental manipulation according to which participants were exposed to an experience of social inclusion or social exclusion during a computer game. Subsequently, participants evaluated the robot regarding psychological anthropomorphism, mind perception, and reported the experienced fulfillment of social needs as well as their individual disposition to anthropomorphize. The present research aimed at demonstrating that situationally induced inclusionary status should predominantly influence experienced social needs fulfillment, but not anthropomorphic inferences about a robot. Analogously, we presumed that evaluations of the robot should mainly be driven by the individual disposition to anthropomorphize nonhuman entities, whereas inclusionary status should not affect these judgments. As predicted, inclusionary status only affected experienced social needs fulfillment, whereas the experimental manipulation did not affect robot-related evaluations. In a similar vein, participants low (vs. high) in anthropomorphism differed in their assessment of humanity and mind perception of the robot prototype, whereas inclusionary status did not affect these anthropomorphic inferences. Results are discussed in light of the existing literature on social exclusion, social needs fulfillment, and anthropomorphization of robots.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132093507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Floor estimation by a wearable travel aid for visually impaired
Pub Date: 2015-11-23 · DOI: 10.1109/ROMAN.2015.7333581
Hiromi Watanabe, T. Tanzawa, Tsuyoshi Shimizu, S. Kotani
During a disaster, it may be difficult for the visually impaired to use infrastructure or to obtain a care worker's support. To support the independence of the visually impaired, we have developed a wearable travel aid that assists in safe travel, including navigating stairs, without special infrastructure. Because safe navigation requires accurate position estimation, our system fuses data from different sensors to estimate the pedestrian's position. In this paper, we propose a floor estimation method based on fuzzy inference with a laser range finder (LRF). When walking on a floor, an image processing system and an LRF are used together to estimate the position; when walking on stairs, a second LRF is used to estimate the floor and detect obstacles. The experimental results show that our floor estimation is also useful for position estimation.
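As a rough, hypothetical illustration of fuzzy inference over LRF measurements (the abstract does not give the actual membership functions or rules), the sketch below classifies the surface ahead from a measured height step; all thresholds and labels are invented.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def surface_memberships(step_height_m):
    """Fuzzy membership degrees for the surface ahead, from the height step
    measured by a downward-facing LRF (all thresholds are invented)."""
    return {
        "flat floor":  tri(step_height_m, -0.05, 0.0, 0.05),
        "stairs up":   tri(step_height_m, 0.05, 0.17, 0.30),
        "stairs down": tri(step_height_m, -0.30, -0.17, -0.05),
    }

# Defuzzify by taking the label with the highest membership degree.
for h in (0.01, 0.16, -0.18):
    mu = surface_memberships(h)
    label = max(mu, key=mu.get)
    print(f"step {h:+.2f} m -> {label} (mu = {mu[label]:.2f})")
```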
{"title":"Floor estimation by a wearable travel aid for visually impaired","authors":"Hiromi Watanabe, T. Tanzawa, Tsuyoshi Shimizu, S. Kotani","doi":"10.1109/ROMAN.2015.7333581","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333581","url":null,"abstract":"During a disaster, it may be difficult for the visually impaired to use infrastructure or to obtain care worker's support. To support the independence of the visually impaired, we have developed a wearable travel aid that assists in safe travel, including navigating stairs, without special infrastructure. Because safe navigation requires accurate position estimation, our system fuses data from different sensors to estimate a pedestrian's position. In this paper, we propose a floor estimation method based on fuzzy inference with a laser range finder (LRF). When walking on a floor, an image processing system and a LRF are used together to estimate the position, but when walking on stairs, a second LRF is used to estimate the floor and detect obstacles. The experimental results show that our floor estimation is also useful for position estimation.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114683474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social and empathic behaviours: Novel interfaces and interaction modalities
Pub Date: 2015-11-23 · DOI: 10.1109/ROMAN.2015.7333634
P. Marti, I. Iacono
This paper describes the results of research conducted in the European project Accompany, whose aim is to provide older people with services in a motivating and socially acceptable manner to facilitate independent living at home. The project developed a system consisting of a robotic companion, Care-O-bot, as part of a smart environment. Intensive research was conducted to investigate and experiment with robot behaviours that trigger empathic exchanges between an older person and the robot. The paper is articulated in two parts. The first part illustrates the theory that inspired the development of a context-aware Graphical User Interface (GUI) used to interact with the robot. The GUI integrates an expressive mask that allows perspective taking, with the aim of stimulating empathic exchanges. The second part focuses on the user evaluation and reports the outcomes of three different tests. The results of the first two tests show positive acceptance of the GUI by the older people. The final test reports qualitative comments by senior participants on the occurrence of empathic exchanges with the robot.
{"title":"Social and empathic behaviours: Novel interfaces and interaction modalities","authors":"P. Marti, I. Iacono","doi":"10.1109/ROMAN.2015.7333634","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333634","url":null,"abstract":"This paper describes the results of a research conducted in the European project Accompany, whose aim is to provide older people with services in a motivating and socially acceptable manner to facilitate independent living at home. The project developed a system consisting of a robotic companion, Care-O-bot, as part of a smart environment. An intensive research was conducted to investigate and experiment with robot behaviours that trigger empathic exchanges between an older person and the robot. The paper is articulated in two parts. The first part illustrates the theory that inspired the development of a context-aware Graphical User Interface (GUI) used to interact with the robot. The GUI integrates an expressive mask allowing perspective taking with the aim to stimulate empathic exchanges. The second part focuses on the user evaluation, and reports the outcomes from three different tests. The results of the first two tests show a positive acceptance of the GUI by the older people. The final test reports qualitative comments by senior participants on the occurrence of empathic exchanges with the robot.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123074082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented reality robot navigation using infrared marker
Pub Date: 2015-11-23 · DOI: 10.1109/ROMAN.2015.7333607
Ryotaro Kuriya, T. Tsujimura, K. Izumi
This paper proposes a new augmented reality system for robot navigation. The system generates pattern IDs by analyzing infrared optical markers projected in the real world, and guides the robot through complicated motions via robot commands corresponding to the pattern IDs. A prototype system was constructed to conduct fundamental experiments on remote robot navigation.
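A plausible reading of the ID-to-command mapping the abstract describes is a simple dispatch table; the sketch below is an assumption-laden illustration, with invented pattern IDs, commands, and a stub Robot class.

```python
class Robot:
    """Stand-in for the mobile robot; prints the velocity command it receives."""
    def drive(self, v, w):
        print(f"cmd: linear = {v:.2f} m/s, angular = {w:.2f} rad/s")

# Invented pattern IDs mapped to motion commands (the abstract does not
# specify the actual ID encoding or command set).
COMMANDS = {
    0b0001: lambda r: r.drive(0.2, 0.0),   # go forward
    0b0010: lambda r: r.drive(0.0, 0.5),   # turn left
    0b0100: lambda r: r.drive(0.0, -0.5),  # turn right
    0b1000: lambda r: r.drive(0.0, 0.0),   # stop
}

def execute_marker_command(robot, pattern_id):
    """Dispatch the command bound to a detected marker ID; unknown IDs are
    ignored so spurious detections cannot move the robot."""
    action = COMMANDS.get(pattern_id)
    if action is not None:
        action(robot)

execute_marker_command(Robot(), 0b0010)  # -> turn left
```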
{"title":"Augmented reality robot navigation using infrared marker","authors":"Ryotaro Kuriya, T. Tsujimura, K. Izumi","doi":"10.1109/ROMAN.2015.7333607","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333607","url":null,"abstract":"A new augmented reality system is proposed to apply to robot navigation in this paper. It generates pattern IDs by analyzing infrared optical markers projected in the real world. The system enables the robot to guide a complicated motion by the robot commands corresponding to the pattern IDs. A prototype system is constructed to conduct fundamental experiments of remote robot navigation.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130190471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Probabilistic modeling of mental models of others
Pub Date: 2015-11-23 · DOI: 10.1109/ROMAN.2015.7333635
T. Nagai, Kasumi Abe, Tomoaki Nakamura, N. Oka, T. Omori
Intimacy is a very important factor not only for communication between humans but also for communication between humans and robots. In this research we propose an action decision model based on others' friendliness values, which can be estimated using mental models of others. We examine the mutual adaptation process of two agents, each with its own model of the other, through interaction between them.
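One way to picture such an action decision model is as choosing the action that maximizes the other's estimated friendliness under the agent's mental model of the other; the toy Python sketch below is purely illustrative, with an invented blending rule, action set, and numbers.

```python
ACTIONS = ["greet", "share", "ignore"]

# Mental model of the other agent: modeled friendliness response to each
# action (all numbers are made up for illustration).
mental_model = {"greet": 0.7, "share": 0.8, "ignore": 0.1}

def estimated_friendliness(action, prior):
    """Blend the other's prior friendliness with the modeled response."""
    return 0.5 * prior + 0.5 * mental_model[action]

def decide(prior_friendliness):
    """Choose the action expected to maximize the other's friendliness."""
    return max(ACTIONS, key=lambda a: estimated_friendliness(a, prior_friendliness))

print(decide(0.4))  # -> "share" under these made-up numbers
```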
{"title":"Probabilistic modeling of mental models of others","authors":"T. Nagai, Kasumi Abe, Tomoaki Nakamura, N. Oka, T. Omori","doi":"10.1109/ROMAN.2015.7333635","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333635","url":null,"abstract":"Intimacy is a very important factor not only for the communication between humans but also for the communication between humans and robots. In this research we propose an action decision model based on others' friendliness value, which can be estimated using the mental models of others. We examine the mutual adaptation process of two agents, each of which has its own model of others, through the interaction between them.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"53 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122478794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}