
Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication — Latest Publications

Bi-directional human machine interface via direct neural connection
M. Gasson, B. Hutt, I. Goodhew, P. Kyberd, K. Warwick
This paper presents an application study into the use of a bi-directional link with the human nervous system by means of an implant positioned through neurosurgery. Various applications are described, including the interaction of neural signals with an articulated hand, with a group of cooperative autonomous robots, and with the control of movement of a mobile platform. The microelectrode array implant itself is described in detail. Consideration is given to a wider range of possible robot mechanisms that could interact with the human nervous system through the same technique.
Citations: 16
Absolute stable haptic interaction with the isotropic force display
A. Frisoli, M. Bergamasco
This paper presents different conditions for unconditional stability of the interaction of human operators with haptic interface systems. Criteria for the unconditional stability have been theoretically derived and experimentally assessed on the isotropic force display. A good match has been observed between theoretical predictions and real performance.
Citations: 0
Fractal representation of image feature associated with maneuvering affordance
K. Kamejima
One of the essential capabilities of 'real world intelligence', whether developed naturally or designed artificially, is to generate feasible operations based on innate beliefs about the real world. As the cognitive basis of real-world intelligence, visual perception organizes randomly distributed image features into environment features: well-structured visibles available as consistent cues for subsequent decisions. Such phenomenal supervenience on reality plays a crucial role in implementing cooperative systems intended for field automation, vehicle-roadway networking, community restoration from disaster, and interactive education; e.g., to generate consistent decisions, partial knowledge of the environment must be adapted intentionally to the encountered scene prior to full comprehension of the situation. Such a self-reference structure, however, yields a serious contradiction in understanding natural perception mechanisms and/or implementing artificial vision systems. In this paper, a directional Fourier transform is applied to extract maneuvering affordance from noisy imagery. By identifying the brightness distribution of observed patterns with the invariant measure of an unknown fractal attractor, noise levels are estimated for extracting affordance patterns. The detectability of affordance patterns has been verified through experimental studies.
Citations: 1
Improving the man-machine interface through the analysis of expressiveness in human movement
A. Camurri, P. Coletta, B. Mazzarino, R. Trocca, G. Volpe
In this paper, our recent developments in the research of computational models and algorithms for the real-time analysis of full-body human movement are presented. Our aim is to find methods and techniques to extract cues relevant to KANSEI and emotional content in human expressive gesture in real time. Analysis of expressiveness in human gestures can contribute to new paradigms for the design of improved human-robot interfaces. As a main concrete result of our research work, a software platform named EyesWeb has been developed and is distributed for free (www.eyesweb.org). EyesWeb supports research in multimodal interaction and provides a concrete tool for developing real-time interactive applications. Human movement analysis is provided by means of a library of algorithms for sensor and video processing, feature extraction, gesture segmentation, etc. A visual environment is provided to compose such basic algorithms in order to develop more sophisticated analysis techniques.
Citations: 5
Intuitive teaching and surveillance for production assistants
S. Estable, I. Ahms, H. Backhaus, O. El Zubi, R. Muenstermann
The increased use of production assistants will allow new factory requirements to be fulfilled, such as the production of small series, the reduction of innovation cycles, and the optimization of factory workload. The possible components of such a production assistant, dedicated to object manipulation tasks, have been investigated by Astrium in the project MORPHA. Two features characterize such an assistant system: intuitive teaching and surveillance. Thus, three main components have been specified and implemented: pose estimation skills, intuitive trajectory generation, and surveillance for workspace sharing. These components are described and the results evaluated.
Citations: 5
A prototype robot speech interface with multimodal feedback
M. Haage, S. Schotz, P. Nugues
Speech recognition is available on ordinary personal computers and is starting to appear in standard software applications. A known problem with speech interfaces is their integration into current graphical user interfaces. This paper reports on a prototype developed for studying integration of speech into graphical interfaces aimed towards programming of industrial robot arms. The aim of the prototype is to develop a speech system for designing robot trajectories that would fit well with current CAD paradigms.
Citations: 12
Natural language instructions for joint spatial reference between naive users and a mobile robot
R. Moratz, T. Tenbrink
Many tasks in the field of service robotics could benefit from a natural language interface that allows human users to talk to the robot as naturally as possible. However, so far we lack information about what would be natural to human users, as most experimental robotic systems involving natural language developed so far have not been systematically tested with human users unfamiliar with the system. In our simple scenario, human users refer to objects via their location rather than feature descriptions. Our robot uses a computational model of spatial reference to interpret the linguistic instructions. In experiments with naive users we test the adequacy of the model for achieving joint spatial reference. We show how our approach can be extended to more complex spatial tasks in natural human-robot interaction.
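As an illustration of what a computational model of projective spatial reference can look like, here is a minimal sketch that maps the angle between a robot's heading and an object into a qualitative direction term. The sector boundaries, function name, and four-term vocabulary are assumptions for illustration, not the authors' model:

```python
import math

def spatial_term(robot_xy, obj_xy, heading_deg=0.0):
    """Classify an object's position relative to the robot into a
    qualitative direction term. Hypothetical sketch: 90-degree
    sectors centered on the four canonical directions."""
    dx = obj_xy[0] - robot_xy[0]
    dy = obj_xy[1] - robot_xy[1]
    # Angle of the object relative to the robot's heading, in [0, 360)
    angle = (math.degrees(math.atan2(dy, dx)) - heading_deg) % 360
    if angle < 45 or angle >= 315:
        return "front"
    if angle < 135:
        return "left"
    if angle < 225:
        return "back"
    return "right"
```

An instruction such as "go to the object on the left" would then be grounded by selecting the candidate object whose computed term matches the uttered one.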
Citations: 2
Learning object-specific vision-based manipulation in virtual environments
A. Matsikis, T. Zoumpoulidis, F.H. Broicher, K. Kraiss
In this paper, a method for learning object-specific vision-based manipulation is described. The proposed approach uses a virtual environment containing models of the objects and of the manipulator with an eye-in-hand camera to simplify and automate the training procedure. An object whose form requires a unique final gripper position and orientation was used to train and test the implemented algorithms. A series of smooth paths leading to the final position is generated based on a typical path defined by an operator. Images and corresponding manipulator positions along the produced paths are gathered in the virtual environment and used for the training of a vision-based controller. The controller uses a structure of radial-basis function (RBF) networks and has to execute a long reaching movement that guides the manipulator to the final position, so that afterwards only a minor adjustment of the gripper is needed to complete the grasp.
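A minimal sketch of the kind of RBF regressor such a controller structure suggests — Gaussian basis functions over image-derived feature vectors, with output weights fit by ridge-regularised least squares. The function names, the training method, and the choice of centers are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian radial-basis activations for each sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf(X, Y, centers, width, reg=1e-6):
    """Fit output weights by ridge-regularised least squares.
    X: (n, d) feature vectors; Y: (n, k) target manipulator corrections."""
    Phi = rbf_features(X, centers, width)
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(len(centers)),
                           Phi.T @ Y)

def rbf_predict(X, centers, width, W):
    """Map feature vectors to predicted manipulator corrections."""
    return rbf_features(X, centers, width) @ W
```

In the paper's setting, X would hold features extracted from eye-in-hand images along the demonstrated paths and Y the corresponding manipulator displacements; here the sketch only shows the regression machinery.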
Citations: 4
CORA: An anthropomorphic robot assistant for human environment
I. Iossifidis, C. Bruckhoff, C. Theis, C. Grote, C. Faubel, G. Schoner
We describe the general concept, system architecture, hardware, and behavioral abilities of CORA (Cooperative Robot Assistant), an autonomous non-mobile robot assistant. Starting from our basic assumption that the behavior to be performed determines the internal and external structure of the behaving system, we have designed CORA anthropomorphically to allow for humanlike behavioral strategies in solving complex tasks. Although CORA was built as a prototype of a service robot system to assist a human partner in industrial assembly tasks, we show that CORA's behavioral abilities also transfer to a household environment. After describing the hardware platform and the basic concepts of our approach, we present some experimental results by means of an assembly task.
Citations: 25
The role of cognitive agent models in a multi-agent framework for human-humanoid interaction
K. Kawamura
Partnership between a human and a robot could be enhanced if the robot were intelligent enough to understand human intention and adapt its behavior. In this paper, we describe a multi-agent framework for robot control and human-robot interaction. Cognitive agent models called the Self Agent and the Human Agent are being developed to achieve this goal.
Citations: 2