Infants' preference for infants and adults
Pub Date: 2005-07-19. DOI: 10.1109/DEVLRN.2005.1490950
W. Sanefuji, H. Ohgami, K. Hashiya
In natural settings, human infants tend to prefer infants to older children. Some laboratory-based studies have reported that infants also show a preference for adults as strong as that for age-mates. We showed that infants looked longer at infants than at children, and that they produced banging behaviors more frequently while looking at infants and at adults than while looking at children. Our study suggests different cognitive bases for infants' preference for infants and for adults: the preference for infants might be explained as a combination of a preference for babyish characteristics (a preference adults show as well) and a perceptual preference for similar others. The preference for adults, on the other hand, might reflect infants' daily learning through experience. Infants might prefer adults as familiar others.
{"title":"'Infants' preference for infants and adults','','','','','','','','93','95',","authors":"W. Sanefuji, H. Ohgami, K. Hashiya","doi":"10.1109/DEVLRN.2005.1490950","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490950","url":null,"abstract":"In natural settings, human infants tend to prefer infants to older children. Some laboratory-based studies reported that infants also show preference for adults, as much as for the age-mates. We showed that infants looked at infants longer than at children and that they showed banging behaviors more frequently while looking at infants and at adults than while looking at children. Our study suggested different cognitive basis for the infants' preference for infants and for adults: infants' preference for infants might be explained as a combination of the preference for babyish characteristics (same as adults) and the perceptual preference for similar others. On the other hand, the preference for adults might reflect the infants' daily learning through experience. Infants might prefer adults as familiar others","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114331289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motor interference between Humans and Humanoid Robots: Effect of Biological and Artificial Motion
Pub Date: 2005-07-19. DOI: 10.1109/DEVLRN.2005.1490951
T. Chaminade, D. W. Franklin, E. Oztop, G. Cheng
If humanoid robots are to become commonplace in our society, it is important to understand how they are perceived by humans. An influential model in social cognitive neuroscience posits that in human face-to-face interaction, observing another individual perform an action facilitates the execution of a similar action and interferes with the execution of a different one. In one interference experiment, null interference was reported when subjects observed an industrial robotic arm moving at constant velocity perform an incongruent task, suggesting that the effect may be specific to interaction with other humans. We adapted this experimental paradigm to investigate how humanoid robots interfere with humans. Subjects performed rhythmic arm movements while observing either a human agent or a humanoid robot performing congruent or incongruent movements with comparable kinematics. The variance of the executed movements was used as a measure of the amount of interference. In a previous report, we showed that, in contrast to the robotic arm, the humanoid robot caused a significant increase in movement variance during the incongruent condition. In the present report we investigate the effect of movement kinematics on this interference. The humanoid robot moved either with biological motion, based on a realistic model of human motion, or with artificial motion. We examined the variance of the subjects' movements during the incongruent condition, hypothesizing that it should be reduced for the artificial movement compared to the biological movement. We found a significant effect of the factors defining the experimental conditions, congruency and the type of movement kinematics, on the subjects' movement variance. Congruency had the expected effect, but the increase in the incongruent conditions was significant only when the robot's movements followed biological motion. This result implies that motion is a significant factor in the interference effect.
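To make the dependent measure concrete: the interference measure described above is the variance of the executed rhythmic movements. Below is a minimal Python sketch of one way such a measure could be computed from recorded hand trajectories; the trajectory format, the fixed cycle length, and all names are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def cycle_variance(trajectory, cycle_len):
    """Variance of a rhythmic movement around its mean cycle.

    trajectory: (T, 3) array of hand positions sampled at a fixed rate;
    cycle_len:  samples per movement cycle (assumed constant here).
    Returns the mean squared deviation from the average cycle, a simple
    proxy for the 'amount of interference' in the movement.
    """
    n_cycles = len(trajectory) // cycle_len
    cycles = trajectory[: n_cycles * cycle_len].reshape(n_cycles, cycle_len, 3)
    mean_cycle = cycles.mean(axis=0)
    return np.mean(np.sum((cycles - mean_cycle) ** 2, axis=2))

# Hypothetical comparison: congruent vs. incongruent observation conditions.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 100)
template = np.stack([np.sin(t), np.zeros_like(t), np.cos(t)], axis=1)

congruent = np.tile(template, (20, 1)) + 0.01 * rng.standard_normal((2000, 3))
incongruent = np.tile(template, (20, 1)) + 0.05 * rng.standard_normal((2000, 3))

print("congruent variance:  ", cycle_variance(congruent, 100))
print("incongruent variance:", cycle_variance(incongruent, 100))
```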
{"title":"Motor interference between Humans and Humanoid Robots: Effect of Biological and Artificial Motion","authors":"T. Chaminade, D. W. Franklin, E. Oztop, G. Cheng","doi":"10.1109/DEVLRN.2005.1490951","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490951","url":null,"abstract":"If humanoid robots are to become commonplace in our society, it is important to understand how they are perceived by humans. An influent model in social cognitive neuroscience posits that in human face-to-face interaction, the observation of another individual performing an action facilitates the execution of a similar action, and interferes with the execution of different action. In one interference experiment, null interference was reported when subjects observed an industrial robotic arm moving at a constant velocity perform an incongruent task, suggesting that this effect may be specific to interacting with other humans. This experimental paradigm was adapted to investigate how humanoid robots interfere with humans. Subjects performed rhythmic arm movements while observing either a human agent or humanoid robot performing either congruent or incongruent movements with comparable kinematics. The variance of the executed movements was used as a measure of the amount of interference in the movements. In a previous report, we reported that in contrast to the robotic arm, the humanoid robot caused a significant increase of the variance of the movement during the incongruent condition. In the present report we investigate the effect of the movement kinematics on the interference. The humanoid robot moved either with a biological motion, based on a realistic model of human motion, or with an artificial motion. We investigated the variance of the subjects' movement during the incongruent condition, with the hypothesis that it should be reduced for the artificial movement in comparison to the biological movement. We found a significant effect of the factors defining the experimental conditions, congruency and type of movements' kinematics, on the subjects' variation. Congruency was found to have the expected effect on the area, but the increase in incongruent conditions was only significant when the robot movements followed biological motion. This result implies that motion is a significant factor for the interference effect","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124300006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visio-tactile binding through double-touching by a robot with an anthropomorphic tactile sensor
Pub Date: 2005-07-19. DOI: 10.1109/DEVLRN.2005.1490957
Y. Yoshikawa, Mamoru Yoshimura, K. Hosoda, M. Asada
Binding, that is, finding the correspondence between sensations in different modalities, is one of the most fundamental cognitive functions. It is still unclear how different sensor modalities such as vision and touch come to be bound. Without a priori knowledge of its sensing structure, it is a formidable problem for a robot even to match the foci of attention in different modalities, since the sensory data from different sensors are not always caused by the same physical phenomenon. In this study, a previous method that enables a robot to quantize its touch sensors by itself was extended.
{"title":"Visio-tactile binding through double-touching by a robot with an anthropomorphic tactile sensor","authors":"Y. Yoshikawa, Mamoru Yoshimura, K. Hosoda, M. Asada","doi":"10.1109/DEVLRN.2005.1490957","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490957","url":null,"abstract":"Binding is one of the most fundamental cognitive functions, how to find the correspondence of sensations between different modalities. It is still unclear how to bind different sensor modalities such as vision and touch. Without a priori knowledge on its sensing structure it is a formidable issue for a robot even to match the foci of attention in different modalities since the sensory data from different sensors are not always caused from the same physical phenomenon. In this study, previous method to make a robot capable of quantizing touch sensors by itself was extended","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116799956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mapping the space of skills: An approach for comparing embodied sensorimotor organizations
Pub Date: 2005-07-19. DOI: 10.1109/DEVLRN.2005.1490960
F. Kaplan, V. Hafner
This article presents a mathematical framework based on information theory for comparing temporally extended embodied sensorimotor organizations. Central to this approach is the notion of configuration: a set of distances between information sources, statistically evaluated over a given time span. Because information distances simultaneously capture the effects of physical closeness, intermodality, functional relationships, and external couplings, a configuration characterizes an embodied interaction with a particular environment. In this approach, collections of skills can be mapped in a unified space as configurations of configurations. This article describes these abstractions formally and presents results of preliminary experiments showing how the framework can be used to capture the behavioral organization of an autonomous robot.
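To illustrate the notion of configuration, the following sketch computes a matrix of pairwise information distances between discretized sensor streams. It assumes the distance d(X,Y) = H(X|Y) + H(Y|X) = 2H(X,Y) - H(X) - H(Y), one standard information metric; the discretization and all names here are illustrative, not taken from the article.

```python
import numpy as np
from collections import Counter

def entropy(symbols):
    """Shannon entropy (bits) of a discrete symbol sequence."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_distance(x, y):
    """d(X,Y) = H(X|Y) + H(Y|X) = 2*H(X,Y) - H(X) - H(Y)."""
    hxy = entropy(list(zip(x, y)))
    return 2.0 * hxy - entropy(x) - entropy(y)

def configuration(streams):
    """Configuration: matrix of pairwise information distances between
    discretized sensor streams, evaluated over the same time span."""
    n = len(streams)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = information_distance(streams[i], streams[j])
    return d

# Toy example: three binned sensor channels over the same time span.
rng = np.random.default_rng(0)
s0 = rng.integers(0, 4, 1000)              # e.g. a binned touch sensor
s1 = (s0 + rng.integers(0, 2, 1000)) % 4   # correlated channel (small distance)
s2 = rng.integers(0, 4, 1000)              # independent channel (large distance)
print(configuration([s0, s1, s2]))
```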
{"title":"Mapping the space of skills: An approach for comparing embodied sensorimotor organizations","authors":"F. Kaplan, V. Hafner","doi":"10.1109/DEVLRN.2005.1490960","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490960","url":null,"abstract":"This article presents a mathematical framework based on information theory to compare temporally-extended embodied sensorimotor organizations. Central to this approach is the notion of configuration: a set of distances between information sources, statistically evaluated for a given time span. Because information distances capture simultaneously effects of physical closeness, intermodality, functional relationship and external couplings, a configuration characterizes an embodied interaction with a particular environment. In this approach, collections of skills can be mapped in a unified space as configurations of configurations. This article describes these different abstractions in a formal manner and presents results of preliminary experiments showing how this framework can be used to capture the behavioral organization of an autonomous robot","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"482 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122587194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Timing-Based Model of Body Schema Adaptation and its Role in Perception and Tool Use: A Robot Case Study
Pub Date: 2005-07-19. DOI: 10.1109/DEVLRN.2005.1490935
C. Nabeshima, M. Lungarella, Y. Kuniyoshi
The multisensory representation of our body (body schema) and its conscious, manipulable counterpart (body image) play a pivotal role in the development and expression of many higher-level cognitive functions, such as tool use, imitation, spatial perception, and self-awareness. This paper addresses how the body schema changes as a result of tool-use-dependent experience. Although it is plausible to assume that such an alteration is inevitable, the mechanisms underlying this plasticity have yet to be clarified. To tackle the problem, we propose a novel model of body schema adaptation, which we instantiate in a tool-using robot. Our experimental results confirm the validity of the model. They also show that timing is a particularly important feature of the model because it supports the integration of visual, tactile, and proprioceptive sensory information. We hope that the approach presented in this study yields further insights into the development of tool-use skills and their relationship to body schema plasticity.
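One way to picture the role of timing in such a model: if the delay between a visually observed contact and the resulting tactile signal is short and reproducible across trials, the tool can be treated as incorporated into the body schema. The sketch below estimates that delay by cross-correlating event trains; it is only an illustration of timing-based multisensory matching under assumed signal formats and thresholds, not the model proposed in the paper.

```python
import numpy as np

def estimated_delay(visual_events, tactile_events, fs=100.0):
    """Estimate the lag (seconds) between two binary event trains
    by locating the peak of their cross-correlation."""
    xc = np.correlate(tactile_events - tactile_events.mean(),
                      visual_events - visual_events.mean(), mode="full")
    lag = np.argmax(xc) - (len(visual_events) - 1)
    return lag / fs

def timing_is_stable(delays, tol=0.05):
    """Treat the tool as part of the body schema if the visuo-tactile
    delay is reproducible (std below tol seconds) across trials."""
    return np.std(delays) < tol

# Toy trials: visually observed contacts followed ~80 ms later by touch.
rng = np.random.default_rng(2)
delays = []
for _ in range(10):
    visual = np.zeros(1000)
    tactile = np.zeros(1000)
    onsets = rng.integers(0, 900, 5)
    visual[onsets] = 1.0
    tactile[onsets + 8] = 1.0        # 8 samples = 80 ms at 100 Hz
    delays.append(estimated_delay(visual, tactile))
print(delays[0], timing_is_stable(delays))
```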
{"title":"Timing-Based Model of Body Schema Adaptation and its Role in Perception and Tool Use: A Robot Case Study","authors":"C. Nabeshima, M. Lungarella, Y. Kuniyoshi","doi":"10.1109/DEVLRN.2005.1490935","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490935","url":null,"abstract":"The multisensory representation of our body (body schema), and its conscious and manipulable counterpart (body image) play a pivotal role in the development and expression of many higher level cognitive functions, such as tool use, imitation, spatial perception, and self-awareness. This paper addresses the issue of how the body schema changes as a result of tool use-dependent experience. Although it is plausible to assume that such an alteration is inevitable, the mechanisms underlying such plasticity have yet to be clarified. To tackle the problem, we propose a novel model of body schema adaptation which we instantiate in a tool using robot. Our experimental results confirm the validity of our model. They also show that timing is a particularly important feature of our model because it supports the integration of visual, tactile, and proprioceptive sensory information. We hope that the approach exposed in this study allows gaining further insights into the development of tool use skills and its relationship to body schema plasticity","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124736749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Six-month-old infants' expectations for interactive-humanoid robots
Pub Date: 2005-07-19. DOI: 10.1109/DEVLRN.2005.1490962
A. Arita, K. Hiraki, T. Kanda, H. Ishiguro
Summary form only given. As technology advances, many human-like robots are being developed. These humanoid robots should be classified as inanimate objects; however, they share many properties with human beings. This raises the question of how infants classify them. Developmental psychology has addressed how infants come to characterize humans as agents with mental states, an indispensable foundation for sociality. Some studies suggest that infants attribute mental states only to humans. For instance, Legerstee et al. (2000) found that 6-month-old infants expect people to communicate with people, not with objects. Such results indicate that human cognition is specialized for humans from early infancy. Other studies have suggested, however, that infants attribute mental states to non-human objects that appear to interact with a person. For instance, Johnson et al. (1999) showed that 12-month-old infants followed the gaze of a non-human but interactive agent. These results imply that interactivity between humans and objects is the key factor in mental attribution. An interesting question remains, however: do infants also expect robots to communicate with people? In this study, we investigated whether 6-month-old infants expected an experimenter to talk to the humanoid robot "Robovie" (Ishiguro et al., 2001), using infants' looking time as a violation-of-expectation measure. The violation-of-expectation method exploits the fact that infants look longer at events they do not expect than at events they do expect. During test trials, we showed infants stimuli in which an actor talked to the robot and to another person. If infants regard robots as communicative agents, like humans, they should not be surprised and should look at the robot as long as at the person. If infants do not attribute communicative properties to robots, they should look longer at the robot than at the person. To show infants how the robot behaved and interacted with people, we added a familiarization period prior to the test trials, a phase that provided infants with prior knowledge about the robot. The familiarization stimuli in the three conditions were as follows: 1) interactive robot condition: the robot behaved like a human, and the person and the robot interacted with each other; 2) non-active robot condition: the robot was stationary while the person was active and talked to the robot; 3) active robot condition: the robot behaved like a human while the person was stationary and silent. If the robot's appearance dominates infants' expectations, the results should be the same across all conditions. If the robot's actions dominate, the interactive and active robot conditions should yield the same results. And if human-robot interaction dominates, only the interactive robot condition should differ. In the results, infants who had watched the interactive robot looked at the robot as long as at the person. However, infants who had watched the other robots (the non-active and the active robot) looked longer at the robot than at the person. A previous study showed that infants come to regard this same robot as an intentional agent earlier than they do simple geometric objects (Kamewari et al.). It has therefore been argued that young infants have a cognitive basis specialized for appearance. Our results suggest, however, that young infants are also sensitive to the external form of human communication, such as turn-taking (Trevarthen, 1980), and thus learn to regard non-human targets as communicative partners.
{"title":"Six-month-old infants' expectations for interactive-humanoid robots","authors":"A. Arita, K. Hiraki, T. Kanda, H. Ishiguro","doi":"10.1109/DEVLRN.2005.1490962","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490962","url":null,"abstract":"Summary form only given. As technology advances, many human-like robots are being developed. These humanoid robots should be classified as inanimate objects; however, they share many properties with human beings. This raises the question of how infants classify them. Developmental psychology has addressed the issue of how infants come to characterize humans as agents having mental states that is indispensable foundation for sociality. Some studies suggest that infants attribute mental states only to humans. For instance, Legerstee et al. (2000) found that 6-month-old infants do expect people to communicate with people, not with objects. These results indicate that human cognition specializes in human in early infancy. Other studies have suggested, however, that infants attribute mental states to non-human objects that appear to be interactive with a person. For instance, Johnson et al. (1999) indicated that 12-month-old infants did gaze following to a non-human but interactive stuff. These results imply that interactivity between humans and objects is the key factor in mental attribution, however, interesting questions remain to be answered: do infants also have expectation for robots to communicate with person? In this study, we investigated whether 6-month-old infants expected an experimenter to talk to a humanoid robot \"Robovie\" [Ishiguro, et al., (2001) using infants' looking time as a measurement of violation-of-expectation. Violation-of-expectation method uses infants' property that they look longer at the event that they do not expect than at the event that they expect. During test trials, we show infants the stimulus in which an actor talks to the robot and another person. If infants regard robots as communicative existence like human, they will not be surprised and look at the robot as long as at the person. But if infants do not attribute communicational property to robots, they will look longer at the robot than at the person. To show infants how the robot behaved and interacted with people, we added a familiarization period prior to the test trials, which phase provided infants with prior knowledge about the robots. The stimuli in the familiarization of these conditions are as follows: 1) interactive robot condition: the robot behaved like a human, and the person and the robot interacted with each other; 2) non-active robot condition: the robot was stationary and the person was both active and talked to the robot; 3) active robot condition: the robot behaved like a human, and the person was stationary and silent. If the robots' appearance is dominant for expectation, the results of all condition are same. If robot' action is dominant, the results of the interactive robot condition and the active robot condition are same. And if human-robot interaction is dominant, the result of the interactive robot condition is only different. In the results, infants who had watched the interactive robot looked at the robot as long as at the person. ","PeriodicalId":297121,"journal":{"name":"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116153476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Memory for Faces in Infants: A Comparison to the Memory for Objects
Pub Date: 2005-07-19. DOI: 10.1109/DEVLRN.2005.1490977
R. Morimoto, K. Hashiya
Memory for faces and objects was investigated in 8- to 10-month-old infants. To provide the experience for memorizing the target face or object, face-to-face interactions between infant and experimenter were conducted in near-natural settings. To assess retention, two-alternative preferential looking tests were administered after both a 3-minute delay and a 1-week delay from the familiarization phase. In the 3-minute delay condition, infants looked longer at the novel face (not the experimenter's), which they had not experienced before, than at the familiar one (the experimenter's). This shows that infants memorize faces from limited experience for at least 3 minutes. In contrast, infants showed no such effect in the object condition. These results might suggest face-specific processing that does not apply to object stimuli. More detailed examinations are needed to test this possibility.
{"title":"Memory for Faces in Infants: A Comparison to the Memory for Objects","authors":"R. Morimoto, K. Hashiya","doi":"10.1109/DEVLRN.2005.1490977","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490977","url":null,"abstract":"Memory for faces and objects was investigated in 8- to 10-month infants. As the experience for memorizing the target face or object, face-to-face interactions between infant and experimenter in almost natural settings were conducted. To assess memory retention, two-alternative preferential looking tests were done after both a 3-minute delay and a 1-week delay from the familiarization phase. In the 3-minute delay condition, the infants looked more at the novel (not-the-experimenter) face that had not been experienced before, than the familiar (the experimenter) one. This shows that the infants memorize faces from limited experience at least for 3 minutes. On the other hand, the infants showed no such results in the object condition. These results might suggest specific processing for faces that cannot be applied for object stimuli. More detailed examinations should be done to examine this possibility","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128509098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The RUBI/QRIO Project: Origins, Principles, and First Steps
Pub Date: 2005-07-19. DOI: 10.1109/DEVLRN.2005.1490948
J. Movellan, F. Tanaka, B. Fortenberry, K. Aisaka
Computers are already powerful enough to sustain useful robots that interact with and assist humans in everyday life. However, progress requires a scientific shakedown in goals and methods not unlike the cognitive revolution that occurred 40 years ago. This document presents the origin and early steps of the RUBI/QRIO project, in which two humanoid robots, RUBI and QRIO, are being brought to an early childhood education center on a daily basis for a period of at least one year. The goal of the RUBI/QRIO project is to accelerate progress on everyday-life interactive robots by addressing the problem at multiple levels, including the development of new scientific methods, formal approaches, and a scientific agenda. The current focus of the project is on educational environments, exploring the ways in which this technology could be used to assist teachers and enrich the educational experiences of children. We describe the origins, philosophy, and first steps of the project, which included immersion of the researchers in the Early Childhood Education Center at UCSD, development of a social robot prototype named RUBI, and daily field studies with RUBI and QRIO, a prototype humanoid developed by Sony.
{"title":"The RUBI/QRIO Project: Origins, Principles, and First Steps","authors":"J. Movellan, F. Tanaka, B. Fortenberry, K. Aisaka","doi":"10.1109/DEVLRN.2005.1490948","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490948","url":null,"abstract":"Computers are already powerful enough to sustain useful robots that interact and assist humans in every-day life. However progress requires a scientific shakedown in goals and methods not unlike the cognitive revolution that occurred 40 years ago. The document presents the origin and early steps of the RUBI/QRIO project, in which two humanoid robots, RUBI and QRIO, are being brought to an early childhood education center on a daily bases for a period of time of at least one year. The goal of the RUBI/QRIO project is to accelerate progress on everyday life interactive robots by addressing the problem at multiple levels, including the development of new scientific methods, formal approaches, and scientific agenda. The current focus of the project is on educational environments, exploring the ways in which this technology could be used to assist teachers and enrich the educational experiences of children. We describe the origins, philosophy and first steps of the project, which included immersion of the researchers in the Early Childhood Education Center at UCSD, development of a social robot prototype named RUBI, and daily field studies with RUBI and QRIO, a prototype humanoid developed by Sony","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127587257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reinforcement Learning of Informative Attention Patterns for Object Recognition
Pub Date: 2005-07-19. DOI: 10.1109/DEVLRN.2005.1490979
L. Paletta, G. Fritz, Christin Seifert
Attention is a highly important phenomenon emerging in infant development (Ruff and Rothbart, 1996). In human perception, sequential visual sampling of the environment is mandatory for object recognition. Sequential attention is viewed here in the framework of a saccadic decision process that aims at minimizing the uncertainty of the semantic interpretation in object or scene recognition. Methodologically, this work provides a framework for learning sequential attention in real-world visual object recognition, using an architecture with three processing stages. The first stage rejects irrelevant local descriptors, providing candidates for foci of interest (FOI). The second stage investigates the information in each FOI using a codebook matcher. The third stage integrates local information via shifts of attention to characterize object discrimination. A Q-learner then adapts from explorative search over the FOI sequences. The methodology is successfully evaluated on representative indoor and outdoor imagery, demonstrating the significant impact of the learning procedures on recognition accuracy and processing time.
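The third stage can be read as standard tabular Q-learning over attention shifts: a state abstracts the sequence of codebook entries visited so far, an action is a saccade to a candidate FOI, and reward arrives when the object is discriminated. The following sketch shows such a learner in generic form; the state encoding, reward scheme, and all names are assumptions for illustration, not the paper's implementation.

```python
import random
from collections import defaultdict

class AttentionQLearner:
    """Tabular Q-learning over sequences of attention shifts.
    A state is the tuple of codebook indices visited so far;
    an action is the index of the next focus of interest (FOI)."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.n_actions, self.alpha, self.gamma, self.eps = n_actions, alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:          # explorative saccade
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in range(self.n_actions))
        td = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td

# Hypothetical episode: reward 1 when the FOI sequence discriminates the object.
learner = AttentionQLearner(n_actions=5)
state = ()                                      # no descriptors visited yet
for step in range(3):
    action = learner.act(state)
    next_state = state + (action,)              # append visited codebook index
    reward = 1.0 if step == 2 else 0.0          # discrimination reached at end
    learner.update(state, action, reward, next_state)
    state = next_state
```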
{"title":"Reinforcement Learning of Informative Attention Patterns for Object Recognition","authors":"L. Paletta, G. Fritz, Christin Seifert","doi":"10.1109/DEVLRN.2005.1490979","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490979","url":null,"abstract":"Attention is a highly important phenomenon emerging in infant development (Ruff and Rothbart, 1996). In human perception, sequential visual sampling about the environment is mandatory for object recognition purposes. Sequential attention is viewed in the framework of a saccadic decision process that aims at minimizing the uncertainty about the semantic interpretation for object or scene recognition. Methodologically, this work provides a framework for learning sequential attention in real-world visual object recognition, using an architecture of three processing stages. The first stage rejects irrelevant local descriptors providing candidates for foci of interest (FOI). The second stage investigates the information in the FOI using a codebook matcher. The third stage integrates local information via shifts of attention to characterize object discrimination. A Q-learner adapts then from explorative search on the FOI sequences. The methodology is successfully evaluated on representative indoors and outdoors imagery, demonstrating the significant impact of the learning procedures on recognition accuracy and processing time","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128246111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Young Infants' Sensitivity to Social Contingency from Mother and Stranger: Developmental Changes
Pub Date: 2005-07-19. DOI: 10.1109/DEVLRN.2005.1490971
M. Okanda, S. Itakura
We investigated whether 1- and 4-month-old infants are sensitive to social contingency from their mother and from a stranger, using a DV live-replay paradigm. The results indicated that 1-month-old infants could detect their mother's non-contingency. Four-month-old infants might be able to use smiling as a social tool to make a stranger's responses contingent again. We propose that sensitivity to social contingency comprises two components: detection and expectancy. Detection is a basic ability, whereas expectancy is an ability infants form toward a partner's contingency; detection may develop earlier than expectancy. Both components are necessary for the development of sensitivity to social contingency. Using a smile as a social tool is an applied ability that develops later. We also found that infants' interest in the mother and the stranger differed between the two age groups. One-month-olds detected only the mother's unusual responses, not the stranger's. By 4 months of age, infants became more sensitive to contingency from strangers because they are more interested in strangers.
{"title":"Young Infants' Sensitivity to Social Contingency from Mother and Stranger: Developmental Changes","authors":"M. Okanda, S. Itakura","doi":"10.1109/DEVLRN.2005.1490971","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490971","url":null,"abstract":"We investigated whether 1- and 4-month-old infants are sensitive to social contingency from mother and stranger via DV live-replay paradigm. The result indicated that 1-month-old infants could detect mother's non-contingency. Four-month-olds infants might be able to use smile as a social tool to make a stranger's response contingent again. We defined that there are two subdivision components in sensitivity to social contingency such as detection and expectancy. Detection is a basic ability, and expectancy is an ability what infants form to partner's contingency. Development of detection may be earlier than that of expectancy. Those two components are necessary for development of sensitivity to social contingency. Using smile as a social tool is one of applied abilities, and it develops later. We also found that infants' interest in mother and stranger differed in two age groups. One-month-old can only detect mother's unusual responses but not stranger's. By age of 4 months, infants became more sensitive to contingency from strangers because they are interested in strangers more","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123121498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}