A case study of an automatic volume control interface for a telepresence system
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333605
Masaaki Takahashi, Masa Ogata, M. Imai, Keisuke Nakamura, K. Nakadai
The telepresence robot is attracting considerable attention as a tool for telecommunication from a remote location. However, existing telepresence robot systems do not allow the volume of the user's utterance to be adjusted precisely, because they do not consider varying conditions in the sound environment, such as noise. In addition, when talking with several people in a remote location, the user would like to be able to change the speaker volume freely according to the situation. A previous study proposed a telepresence robot with a function that automatically regulates the volume of the user's utterance; however, the manner in which users exploit this function in practical situations still needs to be investigated. We propose a telepresence conversation robot system called “TeleCoBot.” TeleCoBot includes an operator's user interface through which the volume of the user's utterance is automatically regulated according to the distance between the robot and the conversation partner and the noise level in the robot's environment. We conducted a case study in which participants played a game using TeleCoBot's interface. The results reveal how the participants used TeleCoBot and which additional factors the system requires.
{"title":"A case study of an automatic volume control interface for a telepresence system","authors":"Masaaki Takahashi, Masa Ogata, M. Imai, Keisuke Nakamura, K. Nakadai","doi":"10.1109/ROMAN.2015.7333605","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333605","url":null,"abstract":"The study of the telepresence robot as a tool for telecommunication from a remote location is attracting a considerable amount of attention. However, the problem arises that a telepresence robot system does not allow the volume of the user's utterance to be adjusted precisely, because it does not consider varying conditions in the sound environment, such as noise. In addition, when talking with several people in remote location, the user would like to be able to change the speaker volume freely according to the situation. In a previous study, a telepresence robot was proposed that has a function that automatically regulates the volume of the user's utterance. However, the manner in which the user exploits this function in a practical situation needs to be investigated. We propose a telepresence conversation robot system called “TeleCoBot.” TeleCoBot includes an operator's user interface, through which the volume of the user's utterance can be automatically regulated according to the distance between the robot and the conversation partner and the noise level in the robot's environment. We conducted a case study, in which the participants played a game using TeleCoBot's interface. The results of the study reveal the manner in which the participants used TeleCoBot and the additional factors that the system requires.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125884777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Effects of interaction and appearance on subjective impression of robots
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333577
Keisuke Nonomura, K. Terada, A. Ito, S. Yamada
Human-interactive robots are assessed according to various factors, such as behavior, appearance, and quality of interaction. In the present study, we investigated the hypothesis that impressions of an unattractive robot are improved by emotional interaction involving physical touch with the robot. An experiment with human subjects confirmed that evaluations of the intimacy factor of unattractive robots improved after two minutes of physical and emotional interaction with them.
{"title":"Effects of interaction and appearance on subjective impression of robots","authors":"Keisuke Nonomura, K. Terada, A. Ito, S. Yamada","doi":"10.1109/ROMAN.2015.7333577","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333577","url":null,"abstract":"Human-interactive robots are assessed according to various factors, such as behavior, appearance, and quality of interaction. In the present study, we investigated the hypothesis that impressions of an unattractive robot will be improved by emotional interaction with physical touch with the robot. An experiment with human subjects confirmed that the evaluations of the intimacy factor of unattractive robots were improved after two minutes of physical and emotional interaction with such robots.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127656715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Toward a better understanding of the communication cues involved in a human-robot object transfer
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333626
Mamoun Gharbi, Pierre-Vincent Paubel, A. Clodic, O. Carreras, R. Alami, J. Cellier
Handing over objects to humans (or taking objects from them) is a key capability for a service robot. Humans are efficient and natural when performing this action, and the purpose of studies on this topic is to bring human-robot handovers to an acceptable, efficient, and natural level. This paper deals with the cues that make a handover look as natural as possible; more precisely, we focus on where the robot should look while performing it. In this context we present a user study, involving 33 volunteers, who judged video sequences in which either a human or a robot gives them an object. They were presented with different sequences in which the agents (robot or human) exhibit different gaze behaviours and were asked to rate the naturalness of each sequence. In addition to this subjective measure, the volunteers were equipped with an eye tracker, which gave us more accurate objective measures.
{"title":"Toward a better understanding of the communication cues involved in a human-robot object transfer","authors":"Mamoun Gharbi, Pierre-Vincent Paubel, A. Clodic, O. Carreras, R. Alami, J. Cellier","doi":"10.1109/ROMAN.2015.7333626","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333626","url":null,"abstract":"Handing-over objects to humans (or taking objects from them) is a key capability for a service robot. Humans are efficient and natural while performing this action and the purpose of the studies on this topic is to bring human-robot handovers to an acceptable, efficient and natural level. This paper deals with the cues that allow to make a handover look as natural as possible, and more precisely we focus on where the robot should look while performing it. In this context we propose a user study, involving 33 volunteers, who judged video sequences where they see either a human or a robot giving them an object. They were presented with different sequences where the agents (robot or human) have different gaze behaviours, and were asked to give their feeling about the sequence naturalness. In addition to this subjective measure, the volunteers were equipped with an eye tracker which enabled us to have more accurate objective measures.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132705570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Sequential intention estimation of a mobility aid user for intelligent navigational assistance
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333580
Takamitsu Matsubara, J. V. Miró, Daisuke Tanaka, James Poon, Kenji Sugimoto
This paper proposes an intelligent mobility aid framework aimed at mitigating the impact of cognitive and/or physical user deficiencies by providing suitable mobility assistance with minimum interference. To this end, a user action model based on Gaussian Process Regression (GPR) is proposed to encapsulate the probabilistic and nonlinear relationships among user action, state of the environment, and user intention. Moreover, the analytical tractability of the GPR predictive distribution permits a sequential Bayesian process for user intention estimation. The proposed scheme is validated on data obtained in an indoor setting with an instrumented robotic wheelchair, combining sensory feedback from the environment and user commands with proprioceptive information from the vehicle, and achieves near-real-time accuracy of ~80%. The initial results are promising and indicate that the process is suitable for inferring user driving behaviors in the context of ambulatory robots designed to assist users with mobility impairments during regular daily activities.
{"title":"Sequential intention estimation of a mobility aid user for intelligent navigational assistance","authors":"Takamitsu Matsubara, J. V. Miró, Daisuke Tanaka, James Poon, Kenji Sugimoto","doi":"10.1109/ROMAN.2015.7333580","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333580","url":null,"abstract":"This paper proposes an intelligent mobility aid framework aimed at mitigating the impact of cognitive and/or physical user deficiencies by performing suitable mobility assistance with minimum interference. To this end, a user action model using Gaussian Process Regression (GPR) is proposed to encapsulate the probabilistic and nonlinear relationships among user action, state of the environment and user intention. Moreover, exploiting the analytical tractability of the predictive distribution allows a sequential Bayesian process for user intention estimation to take place. The proposed scheme is validated on data obtained in an indoor setting with an instrumented robotic wheelchair augmented with sensorial feedback from the environment and user commands as well as proprioceptive information from the actual vehicle, achieving accuracy in near real-time of ~80%. The initial results are promising and indicating the suitability of the process to infer user driving behaviors within the context of ambulatory robots designed to provide assistance to users with mobility impairments while carrying out regular daily activities.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127220415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

The robot engine — Making the unity 3D game engine work for HRI
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333561
C. Bartneck, Marius Soucy, Kevin Fleuret, E. B. Sandoval
HRI is a multi-disciplinary research field, and integrating its range of expertise into a single project can be challenging. Enabling experts on human behavior to design fluent animations and behaviors for advanced robots is problematic, since the tools available for such robots are often at the prototype stage. We have built The Robot Engine (TRE), based on the Unity 3D game engine, to control robots from within Unity 3D. Unity 3D gives non-programmers a set of powerful animation and interaction design tools with which to visually program and animate robots. We review several animation techniques that are common in computer games and that could make the movements of robots more natural and convincing. We demonstrate the use of TRE with two different Arduino-based robot platforms and believe that it can easily be extended for use with other robots. We further believe that this unconventional integration of technologies has the potential to fully bring the expertise of interaction designers into advanced human-robot interaction projects.
{"title":"The robot engine — Making the unity 3D game engine work for HRI","authors":"C. Bartneck, Marius Soucy, Kevin Fleuret, E. B. Sandoval","doi":"10.1109/ROMAN.2015.7333561","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333561","url":null,"abstract":"HRI is a multi-disciplinary research field and integrating the range of expertise into a single project can be challenging. Enabling experts on human behavior to design fluent animations and behaviors for advanced robots is problematic, since the tools available for such robots are often in their prototype stage. We have built The Robot Engine (TRE) based on the Unity 3D Game Engine to control robots with Unity 3D. Unity 3D allows non-programmers to use a set of powerful animation and interaction design tools to visually program and animate robots. We review several animation techniques that are common in computer games and that could make the movements of robots more natural and convincing. We demonstrate the use of TRE with two different Arduino based robot platforms and believe that it can easily be extended for use with other robots. We further believe that this unconventional integration of technologies has the potential to fully bring the expertise of interaction designers into the process of advanced human-robot interaction projects.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128414468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Computational analysis of human-robot interactions through first-person vision: Personality and interaction experience
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333602
Oya Celiktutan, H. Gunes
In this paper, we analyse interactions with Nao, a small humanoid robot, from the viewpoint of human participants through an ego-centric camera placed on their forehead. We focus on the participants' and the robot's personalities and their impact on human-robot interaction. We automatically extract nonverbal cues (e.g., head movement) from the first-person perspective and explore their relationship with participants' self-reported personality and interaction experience. We generate two types of behaviour for the robot (extroverted vs. introverted) and examine how the robot's personality and behaviour affect the findings. Significant correlations are obtained between the extroversion and agreeableness traits of the participants and the perceived enjoyment with the extroverted robot. Plausible relationships are also found between the measures of interaction experience and personality and the first-person vision features. We then use computational models to automatically predict the participants' personality traits from these features, achieving promising results for the traits of agreeableness, conscientiousness, and extroversion.
{"title":"Computational analysis of human-robot interactions through first-person vision: Personality and interaction experience","authors":"Oya Celiktutan, H. Gunes","doi":"10.1109/ROMAN.2015.7333602","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333602","url":null,"abstract":"In this paper, we analyse interactions with Nao, a small humanoid robot, from the viewpoint of human participants through an ego-centric camera placed on their forehead. We focus on human participants' and robot's personalities and their impact on the human-robot interactions. We automatically extract nonverbal cues (e.g., head movement) from first-person perspective and explore the relationship of nonverbal cues with participants' self-reported personality and their interaction experience. We generate two types of behaviours for the robot (i.e., extroverted vs. introverted) and examine how robot's personality and behaviour affect the findings. Significant correlations are obtained between the extroversion and agreeable-ness traits of the participants and the perceived enjoyment with the extroverted robot. Plausible relationships are also found between the measures of interaction experience and personality and the first-person vision features. We then use computational models to automatically predict the participants' personality traits from these features. Promising results are achieved for the traits of agreeableness, conscientiousness and extroversion.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134185964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Modeling dynamic scenes by one-shot 3D acquisition system for moving humanoid robot
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333628
R. Sagawa, Charles Malleson, M. Morisawa, K. Kaneko, F. Kanehiro, Y. Matsumoto, A. Hilton
For mobile robots, 3D acquisition is required to model the environment. For humanoid robots in particular, a modeled environment is necessary to plan walking control. This environment can include both static objects, such as a ground surface with obstacles, and dynamic objects, such as a person moving around the robot. This paper proposes a system that lets a robot obtain a sufficiently accurate shape of the environment for walking on a ground surface with obstacles, together with a method for detecting dynamic objects in the modeled environment, which the robot needs in order to react to sudden changes in the scene. The 3D acquisition is achieved by a projector-camera system mounted on the robot head that uses a structured-light method to reconstruct the shapes of moving objects from a single frame. The acquired shapes are aligned and merged into a common coordinate system using simultaneous localization and mapping. Dynamic objects are detected as shapes that are inconsistent with the previous frames. Experiments evaluated the accuracy of the 3D acquisition and the robustness of dynamic-object detection with the system serving as the vision system of a humanoid robot.
{"title":"Modeling dynamic scenes by one-shot 3D acquisition system for moving humanoid robot","authors":"R. Sagawa, Charles Malleson, M. Morisawa, K. Kaneko, F. Kanehiro, Y. Matsumoto, A. Hilton","doi":"10.1109/ROMAN.2015.7333628","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333628","url":null,"abstract":"For mobile robots, 3D acquisition is required to model the environment. Particularly for humanoid robots, a modeled environment is necessary to plan the walking control. This environment can include both static objects, such as a ground surface with obstacles, and dynamic objects, such as a person moving around the robot. This paper proposes a system for a robot to obtain a sufficiently accurate shape of the environment for walking on a ground surface with obstacles and a method to detect dynamic objects in the modeled environment, which is necessary for the robot to react to sudden changes in the scene. The 3D acquisition is achieved by a projector-camera system mounted on the robot head that uses a structured-light method to reconstruct the shapes of moving objects from a single frame. The acquired shapes are aligned and merged into a common coordinate system using the simultaneous localization and mapping method. Dynamic objects are detected as shapes that are inconsistent with the previous frames. Experiments were performed to evaluate the accuracy of the 3D acquisition and the robustness with regard to detecting dynamic objects when serving as the vision system of a humanoid robot.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133484644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Conscious/unconscious emotional dialogues in typical children in the presence of an InterActor Robot
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333575
I. Giannopulu, Tomio Watanabe
In the present interdisciplinary study, we combined knowledge from cognitive neuroscience, psychiatry, and engineering to analyze emotion, language, and un/consciousness in children aged 6 (n=20) and 9 (n=20) years via listener-speaker communication. The speaker was always a child; the listener was a Human InterActor or a Robot InterActor, i.e., a small robot that reacts to speech expression by nodding only. We considered unconscious nonverbal emotional expression associated with physiological data (heart rate) as well as conscious processes related to behavioral data (number of nouns and verbs, in addition to reported feelings). The results showed that 1) heart rate was higher for children aged 6 years than for children aged 9 years when the InterActor was the robot; and 2) the number of words (nouns and verbs) expressed by both age groups was higher when the InterActor was a human, and was lower for children aged 6 years than for children aged 9 years. Even if a difference of consciousness exists between the two groups, everything happens as if the InterActor Robot allowed children to elaborate a multivariate equation, encoding and conceptualizing it within their brain and externalizing it as unconscious nonverbal emotional behavior, i.e., automatic activity, whereas the Human InterActor allowed children to externalize the elaborated equation as conscious verbal behavior (words), i.e., controlled activity. Unconscious and conscious processes would thus depend not only on natural environments but also on artificial environments such as robots.
{"title":"Conscious/unconscious emotional dialogues in typical children in the presence of an InterActor Robot","authors":"I. Giannopulu, Tomio Watanabe","doi":"10.1109/ROMAN.2015.7333575","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333575","url":null,"abstract":"In the present interdisciplinary study, we have combined cognitive neuroscience knowledge, psychiatry and engineering knowledge with the aim to analyze emotion, language and un/consciousness in children aged 6 (n=20) and 9 (n=20) years via a listener-speaker communication. The speaker was always a child; the listener was a Human InterActor or a Robot InterActor, i.e.,. a small robot which reacts to speech expression by nodding only. Unconscious nonverbal emotional expression associated with physiological data (heart rate) as well as conscious process related to behavioral data (number of nouns and verbs in addition reported feelings) were considered. The results showed that 1) the heart rate was higher for children aged 6 years old than for children aged 9 years old when the InterActor was the robot; 2) the number of words (nouns and verbs) expressed by both age groups was higher when the InterActor was a human. It was lower for the children aged 6 years than for the children aged 9 years. Even if a difference of consciousness exists amongst the two groups, everything happens as if the InterActor Robot would allow children to elaborate a multivariate equation encoding and conceptualizing within their brain, and externalizing into unconscious nonverbal emotional behavior i.e., automatic activity. The Human InterActor would allow children to externalize the elaborated equation into conscious verbal behavior (words), i.e., controlled activity. Unconscious and conscious processes would not only depend on natural environments but also on artificial environments such as robots.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121043249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Constraints on freely chosen action for moral robots: Consciousness and control
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333654
P. Bello, John Licato, S. Bringsjord
The protean word ‘autonomous’ has gained broad currency as a descriptive adjective for AI research projects, robotic and otherwise. Depending upon context, ‘autonomous’ at present connotes anything from a shallow, purely reactive system to a sophisticated cognitive architecture reflective of much of human cognition; hence the term fails to pick out any specific set of constitutive functionality. However, philosophers and ethicists have something relatively well-defined in mind when they talk about autonomy. For them, an autonomous agent is often by definition potentially morally responsible for its actions. Moreover, as a prerequisite for correct ascription of ‘autonomous,’ a certain capacity to choose freely is assumed, even if this freedom is understood to be semi-constrained by societal conventions, moral norms, and the like.
{"title":"Constraints on freely chosen action for moral robots: Consciousness and control","authors":"P. Bello, John Licato, S. Bringsjord","doi":"10.1109/ROMAN.2015.7333654","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333654","url":null,"abstract":"The protean word `autonomous' has gained broad currency as a descriptive adjective for AI research projects, robotic and otherwise. Depending upon context, `autonomous' at present connotes anything from a shallow, purely reactive system to a sophisticated cognitive architecture reflective of much of human cognition; hence the term fails to pick out any specific set of constitutive functionality. However, philosophers and ethicists have something relatively well-defined in mind when they talk about the idea of autonomy. For them, an autonomous agent is often by definition potentially morally responsible for its actions. Moreover, as a prerequisite to correct ascription of `autonomous,' a certain capacity to choose freely is assumed - even if this freedom is understood to be semi-constrained by societal conventions, moral norms, and the like.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128813235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A novel 4 DOF eye-camera positioning system for Androids
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333608
Edgar Flores, S. Fels
We present a novel eye-camera positioning system with four degrees of freedom (DOF). The system has been designed to emulate human eye movements, including saccades, for anatomically accurate androids. The architecture of our system is similar to that of a universal joint: a hollowed sphere (the eyeball), hosting a miniature CMOS color camera, takes the part of the cross shaft connecting a pair of hinges oriented at 90 degrees to each other. This concept allows the motors to remain static, enabling them to be placed in multiple configurations during the mechanical design stage and facilitating the inclusion of other robotic parts in the robot's head. Based on our evaluations, the robotic eye-camera is suitable for perception experiments that require human-like eye motion.
{"title":"A novel 4 DOF eye-camera positioning system for Androids","authors":"Edgar Flores, S. Fels","doi":"10.1109/ROMAN.2015.7333608","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333608","url":null,"abstract":"We present a novel eye-camera positioning system with four degrees-of-freedom (DOF). The system has been designed to emulate human eye movements, including saccades, for anatomically accurate androids. The architecture of our system is similar to that of a universal joint in that a hollowed sphere (the eyeball), hosting a miniature CMOS color camera, takes the part of the cross shaft that connects a pair of hinges that are oriented at 90 degrees of each other. This concept allows the motors to remain static, enabling placing them in multiple configurations during the mechanical design stage facilitating the inclusion of other robotic parts into the robots head. Based on our evaluations, the robotic eye-camera has been shown to be suitable for perception experiments that require human-like eye motion.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117208659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}