
2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN): Latest Publications

A case study of an automatic volume control interface for a telepresence system
Masaaki Takahashi, Masa Ogata, M. Imai, Keisuke Nakamura, K. Nakadai
The study of the telepresence robot as a tool for telecommunication from a remote location is attracting considerable attention. However, existing telepresence robot systems do not allow the volume of the user's utterance to be adjusted precisely, because they do not consider varying conditions in the sound environment, such as noise. In addition, when talking with several people at a remote location, the user would like to be able to change the speaker volume freely according to the situation. In a previous study, a telepresence robot was proposed that has a function that automatically regulates the volume of the user's utterance. However, the manner in which the user exploits this function in a practical situation needs to be investigated. We propose a telepresence conversation robot system called “TeleCoBot.” TeleCoBot includes an operator's user interface through which the volume of the user's utterance can be automatically regulated according to the distance between the robot and the conversation partner and the noise level in the robot's environment. We conducted a case study in which the participants played a game using TeleCoBot's interface. The results reveal the manner in which the participants used TeleCoBot and the additional factors that the system requires.
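As a rough illustration of the regulation the abstract describes, the following Python sketch sets a speaker level from partner distance and ambient noise. The formula, gains and reference values are assumptions for illustration only; the paper does not publish TeleCoBot's actual control law.

```python
import math

def regulate_volume(distance_m: float, noise_db: float,
                    base_db: float = 60.0,
                    ref_distance_m: float = 1.0,
                    snr_margin_db: float = 10.0,
                    max_db: float = 85.0) -> float:
    """Return a target speaker level in dB SPL (illustrative model, not TeleCoBot's).

    Two effects are combined:
    - spreading loss: +20*log10(d/d_ref) keeps the level roughly constant
      at the partner's position as distance grows;
    - noise compensation: keep the output snr_margin_db above the measured
      ambient noise floor.
    """
    spreading = 20.0 * math.log10(max(distance_m, 0.1) / ref_distance_m)
    target = max(base_db + spreading, noise_db + snr_margin_db)
    return min(target, max_db)  # clamp to protect listeners and hardware

print(regulate_volume(distance_m=2.0, noise_db=55.0))  # ~66 dB
```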
Citations: 6
Effects of interaction and appearance on subjective impression of robots
Keisuke Nonomura, K. Terada, A. Ito, S. Yamada
Human-interactive robots are assessed according to various factors, such as behavior, appearance, and quality of interaction. In the present study, we investigated the hypothesis that impressions of an unattractive robot will be improved by emotional interaction involving physical touch with the robot. An experiment with human subjects confirmed that evaluations of the intimacy factor of unattractive robots improved after two minutes of physical and emotional interaction with such robots.
Citations: 0
Toward a better understanding of the communication cues involved in a human-robot object transfer
Mamoun Gharbi, Pierre-Vincent Paubel, A. Clodic, O. Carreras, R. Alami, J. Cellier
Handing over objects to humans (or taking objects from them) is a key capability for a service robot. Humans are efficient and natural when performing this action, and the purpose of studies on this topic is to bring human-robot handovers to an acceptable, efficient and natural level. This paper deals with the cues that make a handover look as natural as possible; more precisely, we focus on where the robot should look while performing it. In this context we propose a user study, involving 33 volunteers, who judged video sequences in which they see either a human or a robot giving them an object. They were presented with different sequences in which the agents (robot or human) have different gaze behaviours, and were asked to rate how natural each sequence felt. In addition to this subjective measure, the volunteers were equipped with an eye tracker, which enabled us to obtain more accurate objective measures.
Citations: 25
Sequential intention estimation of a mobility aid user for intelligent navigational assistance
Takamitsu Matsubara, J. V. Miró, Daisuke Tanaka, James Poon, Kenji Sugimoto
This paper proposes an intelligent mobility aid framework aimed at mitigating the impact of cognitive and/or physical user deficiencies by performing suitable mobility assistance with minimum interference. To this end, a user action model using Gaussian Process Regression (GPR) is proposed to encapsulate the probabilistic and nonlinear relationships among user action, state of the environment and user intention. Moreover, exploiting the analytical tractability of the predictive distribution allows a sequential Bayesian process for user intention estimation to take place. The proposed scheme is validated on data obtained in an indoor setting with an instrumented robotic wheelchair augmented with sensorial feedback from the environment and user commands, as well as proprioceptive information from the actual vehicle, achieving near-real-time accuracy of ~80%. The initial results are promising and indicate the suitability of the process for inferring user driving behaviors in the context of ambulatory robots designed to assist users with mobility impairments during regular daily activities.
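A minimal sketch of the modeling idea, assuming one GPR action model per candidate intention and a recursive Bayes update driven by each model's predictive density. The intention set, training data and kernels are invented for illustration, and scikit-learn stands in for the authors' implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
INTENTIONS = ["door", "desk"]  # hypothetical navigation goals

# Train one action model per intention: state (1-D here) -> user command.
models = {}
for name, sign in zip(INTENTIONS, (1.0, -1.0)):
    X = rng.uniform(0.0, 5.0, size=(40, 1))
    y = sign * np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
    models[name] = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.1)).fit(X, y)

def update(belief, state, action):
    """One sequential Bayesian step: belief *= p(action | state, intention)."""
    post = {}
    for name, gpr in models.items():
        mu, sigma = gpr.predict(np.atleast_2d(state), return_std=True)
        post[name] = belief[name] * norm.pdf(action, loc=mu[0], scale=sigma[0])
    z = sum(post.values())
    return {name: p / z for name, p in post.items()}

belief = {name: 1.0 / len(INTENTIONS) for name in INTENTIONS}  # uniform prior
belief = update(belief, state=[2.0], action=0.9)
print(belief)  # mass shifts toward the intention whose model explains the action
```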
Citations: 14
The robot engine — Making the unity 3D game engine work for HRI
C. Bartneck, Marius Soucy, Kevin Fleuret, E. B. Sandoval
HRI is a multi-disciplinary research field, and integrating the range of expertise into a single project can be challenging. Enabling experts on human behavior to design fluent animations and behaviors for advanced robots is problematic, since the tools available for such robots are often at the prototype stage. We have built The Robot Engine (TRE), which uses the Unity 3D Game Engine to control robots. Unity 3D allows non-programmers to use a set of powerful animation and interaction design tools to visually program and animate robots. We review several animation techniques that are common in computer games and that could make the movements of robots more natural and convincing. We demonstrate the use of TRE with two different Arduino-based robot platforms and believe that it can easily be extended for use with other robots. We further believe that this unconventional integration of technologies has the potential to fully bring the expertise of interaction designers into advanced human-robot interaction projects.
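TRE itself is built in Unity 3D (C# territory); purely as a hedged sketch of the underlying idea, the Python snippet below streams animation keyframes to an Arduino-style robot over a serial link. The port name, baud rate and the `joint:angle` wire protocol are assumptions, not TRE's actual interface.

```python
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # assumed port; adjust for your setup

# Keyframes authored like game-engine animation curves: (time s, pose).
KEYFRAMES = [
    (0.0, {"pan": 90, "tilt": 90}),
    (0.5, {"pan": 120, "tilt": 80}),
    (1.0, {"pan": 60, "tilt": 100}),
]

def play(port: str = PORT) -> None:
    """Replay the keyframes in real time as 'joint:angle' lines over serial."""
    with serial.Serial(port, 115200, timeout=1) as link:
        start = time.monotonic()
        for t, pose in KEYFRAMES:
            time.sleep(max(0.0, t - (time.monotonic() - start)))
            for joint, angle in pose.items():
                link.write(f"{joint}:{angle}\n".encode())

if __name__ == "__main__":
    play()
```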
Citations: 37
Computational analysis of human-robot interactions through first-person vision: Personality and interaction experience
Oya Celiktutan, H. Gunes
In this paper, we analyse interactions with Nao, a small humanoid robot, from the viewpoint of human participants through an ego-centric camera placed on their foreheads. We focus on human participants' and the robot's personalities and their impact on the human-robot interactions. We automatically extract nonverbal cues (e.g., head movement) from the first-person perspective and explore the relationship of nonverbal cues with participants' self-reported personality and their interaction experience. We generate two types of behaviours for the robot (i.e., extroverted vs. introverted) and examine how the robot's personality and behaviour affect the findings. Significant correlations are obtained between the extroversion and agreeableness traits of the participants and the perceived enjoyment with the extroverted robot. Plausible relationships are also found between the measures of interaction experience and personality and the first-person vision features. We then use computational models to automatically predict the participants' personality traits from these features. Promising results are achieved for the traits of agreeableness, conscientiousness and extroversion.
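The prediction stage could look like the sketch below: session-level statistics of first-person cues regressed onto a self-reported trait score. The features, data and the SVR regressor are placeholders; the paper's exact cues and models may differ.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_sessions = 60

# Hypothetical per-session features, e.g. mean/std of head-motion magnitude,
# fraction of frames with large optical flow, gaze-shift rate, blur ratio.
X = rng.standard_normal((n_sessions, 5))
# Synthetic extroversion scores for illustration only.
y = X @ np.array([0.8, -0.2, 0.5, 0.0, 0.1]) + 0.3 * rng.standard_normal(n_sessions)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```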
Citations: 31
Modeling dynamic scenes by one-shot 3D acquisition system for moving humanoid robot
R. Sagawa, Charles Malleson, M. Morisawa, K. Kaneko, F. Kanehiro, Y. Matsumoto, A. Hilton
For mobile robots, 3D acquisition is required to model the environment. Particularly for humanoid robots, a modeled environment is necessary to plan the walking control. This environment can include both static objects, such as a ground surface with obstacles, and dynamic objects, such as a person moving around the robot. This paper proposes a system for a robot to obtain a sufficiently accurate shape of the environment for walking on a ground surface with obstacles and a method to detect dynamic objects in the modeled environment, which is necessary for the robot to react to sudden changes in the scene. The 3D acquisition is achieved by a projector-camera system mounted on the robot head that uses a structured-light method to reconstruct the shapes of moving objects from a single frame. The acquired shapes are aligned and merged into a common coordinate system using the simultaneous localization and mapping method. Dynamic objects are detected as shapes that are inconsistent with the previous frames. Experiments were performed to evaluate the accuracy of the 3D acquisition and the robustness with regard to detecting dynamic objects when serving as the vision system of a humanoid robot.
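A minimal sketch of the "inconsistent with previous frames" test, assuming the merged static model can be rendered into a depth map at the current camera pose: pixels whose newly measured depth disagrees with the model beyond a tolerance are flagged as dynamic. The tolerance value and the rendering step are assumptions; the authors' pipeline is richer than this.

```python
import numpy as np

def dynamic_mask(new_depth: np.ndarray,
                 model_depth: np.ndarray,
                 tol_m: float = 0.05) -> np.ndarray:
    """Boolean mask of pixels whose depth deviates more than tol_m (metres)
    from the depth predicted by the merged static model."""
    valid = (new_depth > 0) & (model_depth > 0)  # both measurements available
    return valid & (np.abs(new_depth - model_depth) > tol_m)

# Toy example: a person 0.5 m in front of an otherwise static 2 m wall.
model = np.full((4, 4), 2.0)
frame = model.copy()
frame[1:3, 1:3] = 1.5
print(dynamic_mask(frame, model).astype(int))
```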
Citations: 0
Conscious/unconscious emotional dialogues in typical children in the presence of an InterActor Robot
I. Giannopulu, Tomio Watanabe
In the present interdisciplinary study, we have combined cognitive neuroscience, psychiatry and engineering knowledge with the aim of analyzing emotion, language and un/consciousness in children aged 6 (n=20) and 9 (n=20) years via listener-speaker communication. The speaker was always a child; the listener was a Human InterActor or a Robot InterActor, i.e., a small robot that reacts to speech by nodding only. Unconscious nonverbal emotional expression associated with physiological data (heart rate) as well as conscious processes related to behavioral data (number of nouns and verbs, in addition to reported feelings) were considered. The results showed that 1) the heart rate was higher for children aged 6 years than for children aged 9 years when the InterActor was the robot; 2) the number of words (nouns and verbs) expressed by both age groups was higher when the InterActor was a human, and it was lower for the children aged 6 years than for the children aged 9 years. Even if a difference of consciousness exists between the two groups, everything happens as if the InterActor Robot allowed children to elaborate a multivariate equation, encoding and conceptualizing it within their brain and externalizing it as unconscious nonverbal emotional behavior, i.e., automatic activity. The Human InterActor would allow children to externalize the elaborated equation as conscious verbal behavior (words), i.e., controlled activity. Unconscious and conscious processes would thus depend not only on natural environments but also on artificial environments such as robots.
Citations: 6
Constraints on freely chosen action for moral robots: Consciousness and control
P. Bello, John Licato, S. Bringsjord
The protean word 'autonomous' has gained broad currency as a descriptive adjective for AI research projects, robotic and otherwise. Depending upon context, 'autonomous' at present connotes anything from a shallow, purely reactive system to a sophisticated cognitive architecture reflective of much of human cognition; hence the term fails to pick out any specific set of constitutive functionality. However, philosophers and ethicists have something relatively well-defined in mind when they talk about the idea of autonomy. For them, an autonomous agent is often by definition potentially morally responsible for its actions. Moreover, as a prerequisite to correct ascription of 'autonomous', a certain capacity to choose freely is assumed - even if this freedom is understood to be semi-constrained by societal conventions, moral norms, and the like.
Citations: 6
A novel 4 DOF eye-camera positioning system for Androids
Edgar Flores, S. Fels
We present a novel eye-camera positioning system with four degrees of freedom (DOF). The system has been designed to emulate human eye movements, including saccades, for anatomically accurate androids. The architecture of our system is similar to that of a universal joint: a hollowed sphere (the eyeball), hosting a miniature CMOS color camera, takes the part of the cross shaft that connects a pair of hinges oriented at 90 degrees to each other. This concept allows the motors to remain static, enabling them to be placed in multiple configurations during the mechanical design stage and facilitating the inclusion of other robotic parts in the robot's head. Based on our evaluations, the robotic eye-camera is suitable for perception experiments that require human-like eye motion.
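For intuition about the kinematics, a small sketch mapping a gaze target in the eye frame to the pan and tilt angles of a universal-joint eyeball (only two of the four DOF are modeled here). The axis conventions are assumptions, not the paper's published design.

```python
import math

def gaze_angles(x: float, y: float, z: float) -> tuple[float, float]:
    """Target (x right, y up, z forward), in metres, -> (pan, tilt) in degrees."""
    pan = math.degrees(math.atan2(x, z))                  # rotation about the vertical hinge
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))  # rotation about the horizontal hinge
    return pan, tilt

print(gaze_angles(0.1, 0.05, 1.0))  # a small rightward/upward saccade
```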
Citations: 3