Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333679
E. B. Sandoval, Omar Mubin
We observe that, in comparison to other media, online videos are a powerful outreach instrument for computing technologies in general and for Human-Robot Interaction (HRI) in particular. Our experience in creating demos, scenarios and educational videos (we present two anecdotal accounts of educational HRI videos) leads us to believe that such modalities can have large pedagogical value if they are designed and conveyed appropriately. We also found that rapid hand drawing (exemplified through fast frames in video) is an interesting technique for propagating science facts across social media. We also discuss the prospects of utilising standard MOOCs as a medium to teach HRI, and thereby present a proposal for a nano-MOOC. Furthermore, we discuss the main challenge of promoting retention through HRI pedagogical material, as there may be a tendency to withdraw from deeply technical content. In conclusion, we present guidelines to promote the uptake of HRI not only across educational institutes but also among the general public.
Title: Making HRI accessible to everyone through online videos: A proposal for a μMOOC in human robot interaction
Published in: 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333652
Silvia Rossi, M. Staffa, Maurizio Giordano, M. D. Gregorio, Antonio Rossi, Anna Tamburro, C. Vellucci
People detection and tracking are essential capabilities in human-robot interaction (HRI). Typically, a tracker's performance is evaluated by measuring objective data, such as the tracking error. In HRI applications, however, human-tracking performance should not be evaluated as a passive sensing behavior, but as an active sensing process in which both the robot and the human are in the loop. In this context, we foresee that robotic non-verbal feedback, such as head movement, plays an important role in improving the system's tracking performance, as well as in reducing the human effort in the interactive tracking process. To verify this assumption, we evaluate a tracker's performance in a joint task between a human and a robot, modeled as a game, under three different settings. We adopt common HRI performance measures, such as robot attention demand and human effort, to evaluate how human-tracking performance scales with the robot feedback channels used.
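A minimal sketch of the kind of measures the abstract names, under simple assumed definitions: mean tracking error as the average Euclidean distance between predicted and ground-truth positions, and robot attention demand (RAD) as the fraction of task time the user must attend to the robot. The function names and data format are illustrative, not the authors' code.

```python
# Hypothetical sketch of two common HRI evaluation measures.
# Definitions here are assumptions, not the paper's exact formulations.

def tracking_error(predicted, actual):
    """Mean Euclidean distance between predicted and ground-truth 2D positions."""
    assert len(predicted) == len(actual) and predicted
    return sum(((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
               for (px, py), (ax, ay) in zip(predicted, actual)) / len(predicted)

def robot_attention_demand(interaction_time, neglect_time):
    """RAD: fraction of total task time spent attending to the robot."""
    return interaction_time / (interaction_time + neglect_time)

print(tracking_error([(0, 0), (1, 1)], [(0, 1), (1, 1)]))  # 0.5
print(robot_attention_demand(30.0, 90.0))                  # 0.25
```

Lower values are better on both measures; comparing them across the three feedback settings is the kind of scaling analysis the abstract describes.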
Title: Robot head movements and human effort in the evaluation of tracking performance
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333598
M. Ligthart, K. Truong
Selecting a suitable form of robot, physical or virtual, for a task is not straightforward. The choice of a physical robot is not self-evident when the task is not physical but entirely social in nature. Results from previous studies comparing robots with different body types have been inconclusive. We performed a user study to provide a sounder comparison between a virtual and a physical robot operating in a social setting. Besides body type, we manipulated the sociability of the robot. Our results show that 1) user preferences indicate that robot sociability is more important than body type when selecting a robot for a non-physical social setting, and 2) the user's attitude towards robots is an important moderating factor influencing robot preference.
Title: Selecting the right robot: Influence of user attitude, robot sociability and embodiment on user preferences
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333562
J. Hoefinghoff, A. V. D. Pütten, J. Pauli, N. Krämer
In this paper we present a system that enables anyone to create robotic applications and that is characterized by adaptivity on different levels. First, users can create their own applications (e.g. playing a card game with the robot) using a decision-making framework for robot companions. Second, within a created application, the robot itself adapts its behavior via user feedback based on a decision-making algorithm (e.g. the robot is taught a card game through user feedback). Depending on the user's expertise, he or she has different possibilities for enhancing the robot's capabilities. For non-expert users in particular, a tool has been developed that provides a graphical user interface for configuring applications. The usability of the tool was evaluated with 5 participants in the 40+ age group. The results show that the framework fulfils the technical requirements for including non-experts, but also reveal ways to improve the tool, such as the placement of the assistance mechanisms offered to the user.
Title: You and your robot companion — A framework for creating robotic applications usable by non-experts
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333670
K. Wada, Mutsuki Yageta, Motoyasu Tooyama
In field studies, observation is often used to investigate the natural behaviors of human subjects. However, it places a heavy burden on observers. To address this problem, we have proposed a “Behavior Observation Robot” that can substitute for a human observer. To observe natural behavior, the robot should avoid attracting much attention from the target subjects. In this study, we investigate attention responses to the robot's movements.
Title: Preliminary investigation of attention responses against behavior observation Robot's movements
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333568
A. Weiss, C. Bartneck
Standardized metrics for assessing the success of robots are a necessity for a research field to compare and validate results. The Godspeed Questionnaire Series (GQS) is one of the most frequently used questionnaires in the field of Human-Robot Interaction (HRI), with over 160 citations as of October 2014. In this paper, we present a meta-analysis of studies that used the GQS. The HRI community uses a large variety of robotic platforms, and only the NAO robot appears to be used by multiple research groups. A qualitative meta-analysis of 18 NAO studies reveals accumulated findings on perceived intelligence, likability, and anthropomorphism, but also reveals contradictions in how the robot's behaviour and task context affect GQS ratings. The paper closes with a reflection on how added value in data analysis and presentation could be achieved for the HRI community in the future.
Title: Meta analysis of the usage of the Godspeed Questionnaire Series
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333657
Naoki Masuyama, C. Loo
Emotion and personality are significant factors in communication. In general, during a decision-making process in human-human communication, emotional factors affect not only logical thinking but also emotional responses. Furthermore, personality produces individual differences among people in behavior patterns, cognitive processes and emotional responses. In this paper, we propose a three-stage (core affect, emotion and mood) robotic emotional model that uses the OCEAN model for personality factors, based on a 2D (Pleasant-Arousal) scaling model. The emotion states in the proposed model are represented on the pleasant-arousal plane. The results of a simulation experiment show that the proposed model can generate different emotional properties based on the personality factors.
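As a rough illustration of how personality factors could parameterize a 2D pleasant-arousal emotion state, here is a minimal sketch. It is an assumption, not the authors' model: extraversion is taken to scale reactivity to stimuli, and neuroticism to slow the decay of affect back toward a neutral mood point.

```python
# Illustrative sketch only: a 2D (pleasure, arousal) state modulated by two
# OCEAN factors. The mapping of personality to dynamics is a hypothetical
# choice for demonstration, not the paper's formulation.

class EmotionState:
    def __init__(self, extraversion=0.5, neuroticism=0.5):
        self.pleasure = 0.0                          # core affect, in [-1, 1]
        self.arousal = 0.0
        self.reactivity = 0.5 + 0.5 * extraversion   # gain on incoming stimuli
        self.decay = 0.8 + 0.15 * neuroticism        # per-step persistence

    def step(self, stim_pleasure=0.0, stim_arousal=0.0):
        clamp = lambda v: max(-1.0, min(1.0, v))
        self.pleasure = clamp(self.decay * self.pleasure
                              + self.reactivity * stim_pleasure)
        self.arousal = clamp(self.decay * self.arousal
                             + self.reactivity * stim_arousal)
        return self.pleasure, self.arousal

extrovert = EmotionState(extraversion=1.0, neuroticism=0.0)
introvert = EmotionState(extraversion=0.0, neuroticism=0.0)
print(extrovert.step(0.5, 0.3))   # extravert reacts more strongly
print(introvert.step(0.5, 0.3))   # same stimulus, weaker response
```

Running identical stimulus sequences through states with different personality parameters yields the kind of divergent emotional trajectories the abstract reports.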
Title: Robotic emotional model with personality factors based on Pleasant-Arousal scaling model
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333694
Yasuto Tamura, Hun-ok Lim
We propose an object recognition method for service robots under the constraint of uncertain object teaching by humans. In previous object recognition methods, the training phase required a large number of prepared images and also required training data without complex backgrounds. However, for robots to perform daily tasks, they should be able to recognize objects despite unclear object teaching by humans. To mitigate the effect of background features on object recognition, our proposed method classifies local features from video images based on saliency. In this paper, we demonstrate the efficacy of the proposed method in recognizing target objects despite unclear teaching by the user.
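The core idea of the saliency step can be sketched very simply: discard local features whose saliency score is low, so background clutter contributes less to the object model learned from unclear teaching. The threshold, scores, and feature vectors below are toy values; the paper's actual saliency measure and descriptors differ.

```python
# Hypothetical illustration of saliency-based feature filtering.
# Scores and descriptors are invented for demonstration.

def filter_by_saliency(features, saliency, threshold=0.5):
    """Keep only the local feature vectors whose saliency passes the threshold."""
    return [f for f, s in zip(features, saliency) if s >= threshold]

feats = [[1.0, 2.0], [0.1, 0.1], [3.0, 1.0]]
sal = [0.9, 0.2, 0.7]                  # background clutter gets low saliency
print(filter_by_saliency(feats, sal))  # [[1.0, 2.0], [3.0, 1.0]]
```

The surviving features would then feed the multiple-instance learner, which tolerates bags that still contain some mislabeled background instances.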
Title: Object recognition using multiple instance learning with unclear object teaching
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333665
L. J. Corrigan, Christina Basedow, Dennis Küster, Arvid Kappas, Christopher E. Peters, Ginevra Castellano
Engagement in task-oriented social robotics is a complex phenomenon, consisting of both task and social elements. Previous work in this area tends to focus on these aspects in isolation, without considering the positive or negative effects one might have on the other. We explore both, in an attempt to understand how engagement with the task might affect the social relationship with the robot, and vice versa. In this paper, we describe the analysis of participant self-report data collected during an exploratory pilot study used to evaluate users' “perception of engagement”. The results of our analysis suggest that, ultimately, it was the users' own perception of the robot's characteristics, such as friendliness, helpfulness and attentiveness, that led to sustained engagement with both the task and the robot.
Title: Perception matters! Engagement in task orientated social robotics
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333700
Wataru Minoshima, Yasuhiro Fukui, Hidekatsu Ito, Suguru N. Kudoh
For neuroprosthetic technology, a simple model system for the interaction between a brain and electronic devices is critical. For this purpose, we developed a neurorobot system, Vitroid, consisting of a living neuronal network and a miniature mobile robot that serves as the neurorobot's body. A Self-Organizing Map (SOM) was employed as the generator of Vitroid's behavior. The SOM maps a high-dimensional feature vector to a 2-dimensional winner unit in its output layer, and neighboring units are assigned to resemble similar input vectors; thus, the SOM also performs pattern classification on the feature vectors of neuronal activity. Cultured neuronal networks on a Multi-Electrode Array (MEA) dish were alternately stimulated through two different electrodes. The SOM mapped the patterns induced by electrical stimulation onto a 30 × 30 2D output layer. Only in the first step of learning is the SOM forced to select a specific, previously assigned winner unit in order to associate specific behaviors; we call this process “seeding”. After the seeding process, the winner units corresponding to the response patterns induced by the two different stimuli were mapped separately. We confirmed that the response patterns evoked by the two different electrical stimuli could be classified and were largely stable. Furthermore, the analysis revealed that spontaneous activity and evoked responses shared the same patterns, suggesting that internal autonomous activity is not merely noise but is almost equivalent to a meaningful response. We also succeeded in collision avoidance for Vitroid using the SOM-based behavior generator.
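The SOM-plus-seeding pipeline described above can be sketched as follows. A small map classifies feature vectors of responses to two stimuli, and the first training step forces a preassigned winner unit per class so that map regions become associated with specific behaviors. Grid size, learning rates, and the toy data are assumptions (the paper used a 30 × 30 map and real MEA recordings).

```python
import numpy as np

# Illustrative SOM with a "seeding" first step; hyperparameters and data are
# invented for demonstration, not taken from the paper.
rng = np.random.default_rng(0)
GRID = 10                      # 10 x 10 output layer
DIM = 16                       # length of each response feature vector
weights = rng.random((GRID, GRID, DIM))
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                              indexing="ij"), axis=-1)

def winner(x):
    """Best-matching unit: the output unit whose weight vector is closest to x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def update(x, bmu, lr=0.5, sigma=2.0):
    """Pull the BMU and its Gaussian neighborhood toward the input vector."""
    dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
    weights[...] += lr * h * (x - weights)

# Two toy stimulus-response prototypes, observed with noise.
proto = {0: rng.random(DIM), 1: rng.random(DIM)}
seed_unit = {0: (1, 1), 1: (8, 8)}   # "seeding": preassigned winners per class

for epoch in range(20):
    for cls in (0, 1):
        x = proto[cls] + 0.05 * rng.standard_normal(DIM)
        bmu = seed_unit[cls] if epoch == 0 else winner(x)  # forced first step
        update(x, bmu)

# After training, the two response classes land in separate map regions,
# so each region can be bound to a distinct robot behavior.
print(winner(proto[0]), winner(proto[1]))
```

Binding behaviors to map regions (rather than to individual inputs) is what lets the robot act consistently even though the underlying neuronal responses vary from trial to trial.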
Title: Relationship between evoked electrical responses and robotic behavior analyzed by Self-Organization Map