
Proceedings. The 4th International Conference on Development and Learning, 2005 (Latest Publications)

From Unknown Sensors and Actuators to Visually Guided Movement
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490934
L. Olsson, C. Nehaniv, D. Polani
This paper describes a developmental system implemented on a real robot that learns a model of its own sensory and actuator apparatus. There is no innate knowledge regarding the modality or representation of the sensory input and the actuators; the system relies on generic properties of the robot's world, such as the piecewise smooth effects of movement on sensory changes. The robot develops the model of its sensorimotor system by first performing random movements to create an informational map of the sensors. Using this map, the robot then learns what effects the different possible actions have on the sensors. After this developmental process the robot can perform simple motion tracking.
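To make the first step concrete, here is a minimal sketch, assuming the informational map is built from the pairwise information metric d(X, Y) = H(X|Y) + H(Y|X) between discretized sensor channels recorded during random movements; the function names, binning scheme, and synthetic data are illustrative and not taken from the paper's implementation.

```python
# A minimal sketch of building an "informational map" of unknown sensors from
# data gathered during random movements, using the information metric
# d(X, Y) = H(X|Y) + H(Y|X) between discretized sensor channels.
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of discrete symbols."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def joint_entropy(x, y):
    """Joint entropy H(X, Y) of two aligned symbol sequences."""
    return entropy(list(zip(x, y)))

def information_distance(x, y):
    """Information metric d(X, Y) = H(X|Y) + H(Y|X) = 2 H(X, Y) - H(X) - H(Y)."""
    return 2.0 * joint_entropy(x, y) - entropy(x) - entropy(y)

def informational_map(sensor_log, n_bins=8):
    """Pairwise information distances between sensor channels.

    sensor_log: array of shape (T, n_sensors) recorded while the robot
    performs random movements.
    """
    # Discretize each channel into equally spaced bins before estimating entropies.
    binned = np.stack([
        np.digitize(ch, np.histogram_bin_edges(ch, bins=n_bins)[1:-1])
        for ch in sensor_log.T
    ])
    n = binned.shape[0]
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = information_distance(binned[i], binned[j])
    return dist

# Example: 500 time steps of 16 unknown sensor channels (synthetic random walks).
rng = np.random.default_rng(0)
log = rng.normal(size=(500, 16)).cumsum(axis=0)
print(informational_map(log).shape)  # (16, 16) distance matrix
```

A distance matrix of this kind can, for example, be embedded with multidimensional scaling to visualize how the sensors relate to one another before the action-effect learning stage.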
Citations: 16
Information Self-Structuring: Key Principle for Learning and Development
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490938
M. Lungarella, O. Sporns
Intelligence and intelligence-like processes are characterized by a complex yet balanced interplay across multiple time scales between an agent's brain, body, and environment. Through sensor and motor activity, natural organisms and robots are continuously and dynamically coupled to their environments. We argue that such coupling represents a major functional rationale for the ability of embodied agents to actively structure their sensory input and to generate statistical regularities. Such regularities in the multimodal sensory data relayed to the brain are critical for enabling appropriate developmental processes, perceptual categorization, adaptation, and learning. We show how information-theoretic measures can be used to quantify statistical structure in the sensory and motor channels of a robot capable of saliency-driven, attention-guided behavior. We also discuss the potential importance of such measures for understanding sensorimotor coordination in organisms (in particular, visual attention) and for robot design.
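As an illustration of the kind of measures meant here, the sketch below estimates channel entropy and the mutual information between a sensory channel and a motor channel from discretized recordings; the channel names, binning scheme, and synthetic data are assumptions for illustration only.

```python
# A minimal, self-contained sketch of information-theoretic measures over
# sensor/motor channels: per-channel entropy and mutual information between
# a sensory channel and a motor channel, estimated from discretized recordings.
import numpy as np

def discretize(x, n_bins=16):
    """Map a continuous channel to integer bin indices."""
    edges = np.histogram_bin_edges(x, bins=n_bins)[1:-1]
    return np.digitize(x, edges)

def entropy(x):
    """Shannon entropy (bits) of a discrete sequence."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(x, y):
    """I(X; Y) = H(X) + H(Y) - H(X, Y) for two aligned discrete sequences."""
    # Encode each (x, y) pair as a single integer so H(X, Y) is just another entropy.
    pair_codes = x * (int(y.max()) + 1) + y
    return entropy(x) + entropy(y) - entropy(pair_codes)

# Example: a camera-derived saliency signal and a pan-motor command recorded
# from an attention-guided robot (synthetic stand-ins here).
rng = np.random.default_rng(1)
motor = rng.normal(size=2000)
sensor = 0.7 * motor + 0.3 * rng.normal(size=2000)   # coupled channel
s, m = discretize(sensor), discretize(motor)
print(f"H(sensor) = {entropy(s):.2f} bits, I(sensor; motor) = {mutual_information(s, m):.2f} bits")
```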
Citations: 93
Does Gaze Reveal the Human Likeness of an Android?
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490953
T. Minato, Michihiro Shimada, S. Itakura, Kang Lee, H. Ishiguro
The development of androids that closely resemble human beings enables us to investigate many phenomena related to human interaction that could not otherwise be investigated with mechanical-looking robots. This is because more humanlike devices are in a better position to elicit the kinds of responses that people direct toward each other. In particular, we cannot ignore the role of appearance in giving us a subjective impression of human presence or intelligence. However, this impression is influenced by behavior and by the complex relationship between appearance and behavior. We propose a hypothesis about how appearance and behavior are related and map out a plan for android research to investigate the hypothesis. We then examine a study that evaluates the behavior of androids according to the patterns of gaze fixations they elicit. Studies such as these, which integrate the development of androids with the investigation of human behavior, constitute a new research area that fuses engineering and science.
Citations: 53
Motion-triggered human-robot synchronization for autonomous acquisition of joint attention
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490980
H. Sumioka, K. Hosoda, Y. Yoshikawa, M. Asada
Joint attention, the behavior of attending to an object to which another person is attending, is an important element not only of human-human communication but also of human-robot communication. Building a robot that autonomously acquires this behavior is a formidable issue, both for establishing design principles for robots that communicate with humans and for understanding the developmental process of human communication. To accelerate learning of the behavior, motion synchronization among the object, the caregiver, and the robot is important, since it ensures information consistency between them. In this paper, we propose a control architecture that utilizes motion information for the synchronization necessary to find this consistency. The task given to the caregiver is to pick up an object on the table and to investigate it with his or her hands, which is a quite natural task for humans. If only the caregiver can move the objects in the environment, the observed motion is that of the caregiver's face and/or that of the object moved by the caregiver. When the caregiver is looking around to find an interesting object, the image flow of the face is observed. After he or she fixates on the object and picks it up, the flow of the face stops and that of the object is observed.
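The triggering idea described above can be caricatured in a few lines: watch the optical-flow magnitude of the face region and of the object region, and fire when the face goes still while the object starts moving. The thresholds, window length, and region tracking below are illustrative assumptions, not the proposed architecture itself.

```python
# A minimal sketch of a motion-based trigger: "face flow stops while object
# flow appears" is treated as the cue that the caregiver has fixated on and
# picked up the object.
from collections import deque

FACE_STILL_THRESH = 0.5    # flow magnitude below which the face counts as still
OBJECT_MOVE_THRESH = 1.0   # flow magnitude above which the object counts as moving

class SyncTrigger:
    def __init__(self, window=5):
        # Keep a short history so a single noisy frame does not fire the trigger.
        self.face_hist = deque(maxlen=window)
        self.obj_hist = deque(maxlen=window)

    def update(self, face_flow_mag, object_flow_mag):
        """Feed per-frame flow magnitudes; returns True when the robot should
        shift its attention to the object the caregiver is handling."""
        self.face_hist.append(face_flow_mag)
        self.obj_hist.append(object_flow_mag)
        if len(self.face_hist) < self.face_hist.maxlen:
            return False
        face_still = all(f < FACE_STILL_THRESH for f in self.face_hist)
        object_moving = all(o > OBJECT_MOVE_THRESH for o in self.obj_hist)
        return face_still and object_moving

# Example: the face stops moving while the object starts to move.
trigger = SyncTrigger()
frames = [(2.0, 0.1), (1.5, 0.2), (0.3, 1.4), (0.2, 1.6), (0.1, 1.5),
          (0.2, 1.7), (0.1, 1.8)]
for face, obj in frames:
    if trigger.update(face, obj):
        print("attend to the object the caregiver is manipulating")
        break
```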
Citations: 5
Imitation faculty based on a simple visuo-motor mapping towards interaction rule learning with a human partner
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490964
M. Ogino, H. Toichi, M. Asada, Y. Yoshikawa
Imitation has been regarded as one of the key technologies indispensable for communication since the discovery of mirror neurons caused a sensation not only in physiology but also in other disciplines such as cognitive science and even robotics. This paper aims at building a human-robot communication system and proposes an observation-to-motion mapping system as a first step towards the final goal of learning natural communication. This system enables a humanoid platform to imitate observed human motion, that is, it provides a mapping from observed human motion data to the robot's own motor commands. To validate the effectiveness of the proposed system, we examine whether the robot can acquire the interaction rule in an environment in which a human moves according to an artificial interaction rule.
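A minimal sketch of what an observation-to-motion mapping of this kind might look like is given below, using nearest-neighbour lookup over stored posture-to-command correspondences; the features, joint values, and lookup scheme are invented for illustration and are not the paper's mapping.

```python
# A minimal sketch of a visuo-motor mapping: observed human posture features
# are mapped to the robot's own motor commands via nearest-neighbour lookup
# over a table of stored correspondences.
import numpy as np

class VisuoMotorMap:
    def __init__(self):
        self.obs_keys = []      # observed posture feature vectors
        self.motor_vals = []    # robot motor commands paired with them

    def add_pair(self, observation, motor_command):
        """Store one (observed posture, own motor command) correspondence."""
        self.obs_keys.append(np.asarray(observation, dtype=float))
        self.motor_vals.append(np.asarray(motor_command, dtype=float))

    def imitate(self, observation):
        """Return the motor command whose stored observation is closest."""
        obs = np.asarray(observation, dtype=float)
        dists = [np.linalg.norm(obs - k) for k in self.obs_keys]
        return self.motor_vals[int(np.argmin(dists))]

# Example with made-up 2-D arm-posture features and 2-joint motor commands.
mapping = VisuoMotorMap()
mapping.add_pair([0.0, 0.0], [0, 0])      # arm down   -> shoulder 0, elbow 0
mapping.add_pair([1.0, 0.2], [90, 10])    # arm raised -> shoulder 90, elbow 10
print(mapping.imitate([0.9, 0.3]))        # closest stored posture: arm raised
```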
Citations: 5
Dynamic Evolution of Language Games between two Autonomous Robots
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490946
Jean-Christophe Baillie, M. Nottale
The "talking robots" experiment, inspired by the "talking heads" experiment from Sony, explores possibilities on how to ground symbols into perception. We present here the first results of this experiment and outline a possible extension to social behaviors grounding: the purpose is to have the robots develop not only a lexicon but also the interaction protocol, or language game that they use to create the lexicon. This raises several complex problems that we review here
Citations: 5
Plans for Developing Real-time Dance Interaction between QRIO and Toddlers in a Classroom Environment
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490963
F. Tanaka, B. Fortenberry, K. Aisaka, J. Movellan
This paper introduces the early stages of a study designed to understand the development of dance interactions between QRIO and toddlers in a classroom environment. The study is part of a project to explore the potential use of interactive robots as instructional tools in education. After a three-month observation period, we are starting the experiment. After explaining the experimental environment, we describe the component technologies used in it: an interactive dance with visual feedback, exploiting the active detection of contingency, and robotic emotion expression.
Citations: 25
Self-development of motor abilities resulting from the growth of a neural network reinforced by pleasure and tensions
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490956
Juan Liu, A. Buller
We present a novel method of machine learning for emergent motor behaviors. The method is based on a growing neural network that initially produces senseless signals but later associates rewarding signals and quasi-rewarding signals with recent perceptions and motor activities and, based on these data, incorporates new cells and creates new connections. The rewarding signals are produced in a device that plays the role of a "pleasure center", whereas the quasi-rewarding signals (which represent pleasure expectation) are generated by the network itself. The network was tested using a simulated mobile robot equipped with a pair of motors, a set of touch sensors, and a camera. Despite a lack of innate wiring for useful behavior, the robot learned without external guidance how to avoid obstacles and approach an object of interest, behaviors that are fundamental for creatures and usually handcrafted in traditional robotic systems.
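The reward-gated growth described above can be caricatured as follows: keep a short history of (perception, action) pairs and, whenever the pleasure signal fires, grow a new cell that associates the most recent pair; grown cells later vote for their action and expose a quasi-reward signal (pleasure expectation) that a fuller implementation could also feed back into growth. Every name, threshold, and the toy scenario below are illustrative assumptions, not the authors' network.

```python
# A minimal caricature of reward-gated growth, not the authors' architecture.
from collections import deque
import random

class GrowingNet:
    def __init__(self, history_len=5):
        self.cells = []                      # grown (perception, action) cells
        self.history = deque(maxlen=history_len)

    def act(self, perception, actions):
        """Prefer an action voted for by a matching cell, else explore."""
        votes = [a for (p, a) in self.cells if p == perception]
        action = random.choice(votes) if votes else random.choice(actions)
        self.history.append((perception, action))
        return action

    def quasi_reward(self, perception):
        """Pleasure expectation: does any grown cell recognise this perception?"""
        return any(p == perception for (p, _) in self.cells)

    def reinforce(self):
        """Called when the external 'pleasure centre' fires: grow a new cell
        associating the most recent perception with the action taken then."""
        pair = self.history[-1]
        if pair not in self.cells:
            self.cells.append(pair)

# Example: turning right after a left bump is rewarded (no collision),
# so the net grows a cell for that contingency.
random.seed(0)
net = GrowingNet()
for _ in range(20):
    a = net.act("bump-left", ["forward", "turn-right"])
    if a == "turn-right":        # stand-in for the external reward signal
        net.reinforce()
print(net.act("bump-left", ["forward", "turn-right"]))  # -> "turn-right"
```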
Citations: 5
Distinguishing Intentional Actions from Accidental Actions
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490972
K. Harui, N. Oka, Y. Yamada
Summary form only given. Although even human infants have the ability to recognize intention (Meltzoff, 1995; Tomasello, 1997), its engineering realization has not yet been established. It is important to realize a man-machine interface that can adapt naturally to humans by inferring whether a human's behavior is intentional or accidental. Various kinds of information, for example voice, facial expression, and gesture, can be used to distinguish whether a behavior is intentional or not; in this study, however, we focus on the prosody and timing of utterances, because when a person makes an accidental movement, we think he or she tends to utter words such as 'oops' unintentionally and in a characteristic fashion. In this study, a video game was built in which a subject plays with an agent and a ball, and the interaction between the subject and the agent was recorded. A system was then built using decision trees (Quinlan, 1996) that learns to distinguish intentional actions of subjects from accidental ones, and the precision of the trees was analyzed. Continuous inputs were used for the C4.5 algorithm, and inputs discretized at regular intervals for the ID3 algorithm. The difference in inputs is the cause of the difference in precision shown in Table I.
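A minimal sketch of the classification step is given below, with scikit-learn's CART decision tree standing in for the C4.5/ID3 trees used in the study; the prosody/timing feature names and toy data are invented for illustration.

```python
# Prosody/timing features extracted around each utterance are fed to a
# decision tree that labels the preceding action as intentional or accidental.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features per event: [utterance latency after the action (s),
#                      mean pitch (Hz), utterance duration (s)]
X = np.array([
    [0.2, 260.0, 0.4],   # fast, high-pitched "oops"-like utterance
    [0.3, 255.0, 0.5],
    [0.25, 270.0, 0.3],
    [1.5, 180.0, 1.2],   # slower, flatter speech after a deliberate action
    [1.8, 175.0, 1.0],
    [2.0, 190.0, 1.4],
])
y = ["accidental", "accidental", "accidental",
     "intentional", "intentional", "intentional"]

clf = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
clf.fit(X, y)

# Classify a new event: a quick, high-pitched exclamation right after a move.
print(clf.predict([[0.3, 265.0, 0.45]]))   # -> ['accidental']
```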
Citations: 5
An Infomax Controller for Real Time Detection of Social Contingency
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490937
J. Movellan
We present a model of behavior according to which organisms react to the environment in a manner that maximizes the information gained about events of interest. We call the approach "Infomax control" because it combines the theory of optimal control with information-maximization models of perception. The approach is reactive, not cognitive, in that it is better described as a continuous "dance" of actions and reactions with the world rather than a turn-taking inferential process like chess playing. The approach is nevertheless intelligent in that it produces behaviors that optimize long-term information gain. We illustrate how Infomax control can be used to understand the detection of social contingency in 10-month-old infants. The results suggest that, while still lacking language, infants of this age actively "ask questions" of the environment, i.e., schedule their actions in a manner that maximizes the expected information return. A real-time Infomax controller was implemented on a humanoid robot to detect people using contingency information. The system worked robustly while requiring little bandwidth and computational cost.
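To make the idea of Infomax action selection concrete, the sketch below scores each candidate action by its expected one-step information gain about a binary hypothesis such as "a responsive person is present" and picks the most informative one; the actions, response-probability model, and numbers are illustrative assumptions, not the controller implemented on the robot.

```python
# One-step "Infomax" action selection over a binary hypothesis.
import math

def entropy(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# P(response observed | hypothesis, action): vocalizing is much more diagnostic
# of a responsive person than silently waiting.
RESPONSE_MODEL = {
    "vocalize": {"person": 0.8, "no_person": 0.1},
    "wait":     {"person": 0.2, "no_person": 0.1},
}

def posterior(prior, action, observed_response):
    like_p = RESPONSE_MODEL[action]["person"]
    like_n = RESPONSE_MODEL[action]["no_person"]
    if not observed_response:
        like_p, like_n = 1 - like_p, 1 - like_n
    evidence = prior * like_p + (1 - prior) * like_n
    return prior * like_p / evidence

def expected_information_gain(prior, action):
    """H(prior) - E_outcome[ H(posterior) ] for a single probing action."""
    p_resp = (prior * RESPONSE_MODEL[action]["person"]
              + (1 - prior) * RESPONSE_MODEL[action]["no_person"])
    exp_post_entropy = (p_resp * entropy(posterior(prior, action, True))
                        + (1 - p_resp) * entropy(posterior(prior, action, False)))
    return entropy(prior) - exp_post_entropy

belief = 0.5
gains = {a: expected_information_gain(belief, a) for a in RESPONSE_MODEL}
print(gains)                      # vocalizing yields the larger expected gain
print(max(gains, key=gains.get))  # -> 'vocalize'
```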
Citations: 36