
Latest publications: 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)

Attitudes towards a handheld robot that learns Proxemics
Chirag Vaswani Bhavnani, Matthias Rolf
Robots that cohabit social spaces must abide by the same behavioural cues humans follow, including interpersonal distancing. Proxemics investigates appropriate interpersonal distances and the factors that affect them, such as gender and age. This paper investigates people's attitudes towards a robot that learns Proxemics rules by gauging direct individual feedback from a person and using it in a reinforcement learning framework. Previous learning attempts have relied on larger robots, for which physical safety is a primary concern. In contrast, our study uses a handheld-sized robot, which allows us to focus on the impact of distance on engageability in dialogue. The general consensus among interviewees was a feeling of ease and safety during interactions, alongside disparity regarding the invasion of personal space, which was influenced by cultural background.
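As a rough illustration of the learning setup described in this abstract, the sketch below casts distance selection as tabular Q-learning with a reward derived from simulated direct human feedback. The candidate distances, reward shape, and parameters are invented for illustration and are not the paper's actual implementation.

```python
import random

# Hypothetical sketch: a robot learns a comfortable interaction distance
# from per-person feedback via tabular Q-learning. All values invented.

DISTANCES_CM = [20, 40, 60, 80, 100]   # candidate stopping distances
ACTIONS = [-1, 0, +1]                  # step closer, stay, step away

def user_feedback(distance_cm, preferred_cm=60):
    """Simulated feedback: reward peaks at the person's (unknown to
    the robot) preferred distance and falls off with deviation."""
    return -abs(distance_cm - preferred_cm) / 20.0

def learn_proxemics(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(len(DISTANCES_CM)) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randrange(len(DISTANCES_CM))
        for _ in range(10):  # short interaction episode
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda x: q[(s, x)]))
            s2 = min(max(s + a, 0), len(DISTANCES_CM) - 1)
            r = user_feedback(DISTANCES_CM[s2])
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in ACTIONS)
                                  - q[(s, a)])
            s = s2
    # Follow the greedy policy from the middle of the range until "stay".
    s = len(DISTANCES_CM) // 2
    for _ in range(len(DISTANCES_CM)):
        a = max(ACTIONS, key=lambda x: q[(s, x)])
        if a == 0:
            break
        s = min(max(s + a, 0), len(DISTANCES_CM) - 1)
    return DISTANCES_CM[s]
```

With the simulated preference above, the greedy policy settles on the 60 cm distance.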
DOI: 10.1109/ICDL-EpiRob48136.2020.9278098 · Published 2020-10-26
Citations: 3
Learning over the Attentional Space with Mobile Robots
Letícia M. Berto, L. Rossi, E. Rohmer, P. Costa, A. S. Simões, Ricardo Ribeiro Gudwin, E. Colombini
The advancement of technology has brought many benefits to robotics. Today, it is possible to equip robots with many sensors that collect different kinds of information on the environment at all times. However, this brings a disadvantage: an increase in the amount of information that is received and needs to be processed. Such computation is expensive for robots and especially difficult when it must be performed online as part of a learning process. Attention is a mechanism that can help us attend to the most critical data at every moment and is fundamental to improving learning. This paper discusses the importance of attention in the learning process by evaluating the possibility of learning over the attentional space. For this purpose, we modeled in a cognitive architecture the essential cognitive functions necessary to learn and used bottom-up attention as input to a reinforcement learning algorithm. The results show that the robot can learn on attentional and sensorial spaces. By comparing various action schemes, we find the set of actions for successful learning.
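The core idea of using bottom-up attention to compress sensory input can be sketched very simply: a saliency map reduces a raw frame to a single attended location, which then serves as a compact state for a learner. The saliency measure below (deviation from the frame mean) is a deliberately minimal stand-in, not the paper's cognitive architecture.

```python
import numpy as np

# Hypothetical sketch: bottom-up saliency compresses a sensory frame
# into one attended location, usable as a reinforcement-learning state.

def saliency(frame):
    """Toy center-surround contrast: deviation from the global mean."""
    return np.abs(frame - frame.mean())

def attended_state(frame):
    """Flat index of the most salient cell = compact attentional state."""
    return int(np.argmax(saliency(frame)))

frame = np.array([[0.1, 0.1, 0.1],
                  [0.1, 0.9, 0.1],
                  [0.1, 0.1, 0.1]])
state = attended_state(frame)   # the bright center cell wins
```

A learner then operates on `state` (one integer) instead of the full sensor frame, which is the dimensionality reduction the abstract argues for.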
DOI: 10.1109/ICDL-EpiRob48136.2020.9278119 · Published 2020-10-26
Citations: 0
From human action understanding to robot action execution: how the physical properties of handled objects modulate non-verbal cues
N. Duarte, Konstantinos Chatzilygeroudis, J. Santos-Victor, A. Billard
Humans manage to communicate action intentions in a non-verbal way, through body posture and movement. We start from this observation to investigate how a robot can decode a human's non-verbal cues during the manipulation of an object with specific physical properties, in order to learn the adequate level of “carefulness” to use when handling that object. We construct dynamical models of the human behaviour using a human-to-human handover dataset consisting of 3 different cups with different levels of filling. We then include these models in the design of an online classifier that identifies the type of action based on the human wrist movement. We close the loop from action understanding to robot action execution with an adaptive and robust controller based on the learned classifier, and evaluate the entire pipeline on a collaborative task with a 7-DOF manipulator. Our results show that it is possible to correctly understand the “carefulness” behaviour of humans during object manipulation, even in the pick-and-place scenario, which was not part of the training set.
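The classification step described here can be illustrated with a minimal online classifier over wrist-motion features. The nearest-centroid scheme and the feature values below are invented for illustration; the paper learns dynamical models from a real handover dataset.

```python
import numpy as np

# Hypothetical sketch: an incrementally updated nearest-centroid
# classifier over wrist-motion features (peak speed, duration)
# separating "careful" from "not careful" handling.

class OnlineCentroidClassifier:
    def __init__(self):
        self.centroids = {}   # label -> (running mean vector, count)

    def update(self, x, label):
        x = np.asarray(x, dtype=float)
        if label not in self.centroids:
            self.centroids[label] = (x.copy(), 1)
        else:
            mean, n = self.centroids[label]
            self.centroids[label] = (mean + (x - mean) / (n + 1), n + 1)

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return min(self.centroids,
                   key=lambda lb: np.linalg.norm(x - self.centroids[lb][0]))

clf = OnlineCentroidClassifier()
clf.update([0.2, 1.8], "careful")      # low peak speed, long duration
clf.update([0.9, 0.6], "not_careful")  # fast, short movement
clf.predict([0.25, 1.5])               # falls near the "careful" centroid
```

Because the centroids update incrementally, the classifier can keep refining itself during the interaction, which is the "online" property the abstract relies on.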
DOI: 10.1109/ICDL-EpiRob48136.2020.9278084 · Published 2020-10-26
Citations: 17
Language Acquisition with Echo State Networks: Towards Unsupervised Learning
Thanh Trung Dinh, Xavier Hinaut
The modeling of children's language acquisition with robots is a long quest paved with pitfalls. Recently, a sentence parsing model that learns in cross-situational conditions has been proposed: it learns from the robot's visual representations. The model, based on random recurrent neural networks (i.e. reservoirs), can achieve significant performance after a few hundred training examples, more quickly than what a theoretical model could do. In this study, we investigate the developmental plausibility of such a model: (i) whether it can learn to generalize from single-object sentences to double-object sentences; (ii) whether it can use more plausible representations: (ii.a) inputs as sequences of phonemes (instead of words) and (ii.b) outputs fully independent of sentence structure (in order to enable purely unsupervised cross-situational learning). Interestingly, tasks (i) and (ii.a) are solved in a straightforward fashion, whereas task (ii.b) suggests that learning with tensor representations is a more difficult task.
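The reservoir idea underlying this work can be sketched compactly: a fixed random recurrent network expands an input sequence into rich temporal features, and only a linear readout is trained (here by ridge regression). The dimensions and the toy task, reproducing the input one step in the past, are illustrative and unrelated to the paper's sentence-parsing task.

```python
import numpy as np

# Minimal echo state network sketch. Only W_out is trained; the
# reservoir (W_in, W) stays fixed. Toy task: 1-step input memory.

rng = np.random.default_rng(0)
N_IN, N_RES = 1, 100
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(inputs):
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

u = rng.uniform(-1, 1, 500)
X = run_reservoir(u)[1:]          # reservoir states at t >= 1
y = u[:-1]                        # target: the previous input u(t-1)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)

pred = X @ W_out
err = np.sqrt(np.mean((pred - y) ** 2))   # RMS error of the readout
```

Spectral-radius scaling below 1 is the usual way to encourage the echo state property, i.e. a fading memory of past inputs.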
DOI: 10.1109/ICDL-EpiRob48136.2020.9278041 · Published 2020-10-26
Citations: 0
No, Your Other Left! Language Children Use To Direct Robots
Deanna Kocher, L. Sarmiento, Samantha Heller, Yupei Yang, T. Kushnir, K. Green
We present an analysis of how children between 4 and 9 years old give directions to a robot. Thirty-eight children in this age range participated in a direction-giving game with a virtual robot and with their caregiver. We considered two different viewpoints (aerial and in-person) and three different affordances (non-humanoid robot, caregiver with eyes closed, and caregiver with eyes open). We report on the frequency of commands that children used, the complexity of the commands, and the navigation styles children used at different ages. We found that pointing and gesturing decreased with age, while “left-right” directions and the use of distances increased with age. From this, we make several recommendations for robot design that would enable a robot to successfully follow directions from children of different ages and help advance children's direction giving.
DOI: 10.1109/ICDL-EpiRob48136.2020.9278108 · Published 2020-10-26
Citations: 0
Humans Perform Social Movements in Response to Social Robot Movements: Motor Intention in Human-Robot Interaction
Ingar Brinck, Lejla Heco, Kajsa Sikström, Victoria Wandsleb, B. Johansson, C. Balkenius
We tested whether observing a motor action encoding social motor intention would cause the spontaneous processing of a complementary response when the action is performed by a humanoid robot. We designed the robot's arm and upper-body movements to manifest the kinematic profiles of human individual and social motor intention, and designed a simple task that involved the robot and a human placing blocks on a table sequentially. Our results show that the behavior of the human can be modulated by human kinematics as encoded in a robot's movement. In several cases, human subjects reciprocated movements that displayed social motor intention with movements showing a similar kinematic profile, while attempting to make eye contact and engaging in turn-taking behaviour during the task. This suggests a novel approach to the design of HRI based on motor processing that promises to be ecologically valid, cheap, automatic, fast, resilient, intuitive, and computationally simple.
DOI: 10.1109/ICDL-EpiRob48136.2020.9278114 · Published 2020-10-26
Citations: 2
Picture completion reveals developmental change in representational drawing ability: An analysis using a convolutional neural network
A. Philippsen, S. Tsuji, Y. Nagai
Drawings of children may provide unique insights into their cognition. Previous research showed that children's ability to draw objects distinctively develops with increasing age. In recent studies, convolutional neural networks have been used as a diagnostic tool to show how the representational ability of children develops. These studies have focused on top-down task modifications by asking a child to draw specific objects. Object representations, however, are influenced by bottom-up visual perception as well as by top-down intentions. Understanding how these processing pathways are integrated, and how this integration changes with development, is still an open question. In this paper, we investigate how bottom-up modifications of the task affect the representational drawing ability of children. We designed a set of incomplete stimuli and asked children between two and eight years old to draw on them without specific task instructions. We found that the higher layers of a deep convolutional neural network pretrained for image classification on objects and scenes differentiated well between different drawing styles (e.g. scribbling vs. meaningful completion), and that older children's drawings were more similar to adult drawings. By analyzing the representations of different age groups, we found that older children adapted to variations in the presented stimuli in a way more similar to adults than younger children did. Therefore, not only a top-down but also a bottom-up modification of stimuli in a drawing task can reveal differences in how children at different ages represent different concepts. This task design opens up the possibility of investigating representational changes independently of language ability, for example, in children with developmental disorders.
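The comparison step in such analyses, measuring how close drawings lie in a network's feature space, can be sketched as a cosine similarity between embedding vectors. The vectors below are made-up placeholders; in the paper the embeddings come from upper layers of a CNN pretrained on objects and scenes.

```python
import numpy as np

# Hypothetical sketch: drawings embedded as feature vectors are
# compared by cosine similarity. Embedding values are invented.

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

adult    = [0.9, 0.1, 0.8, 0.2]   # placeholder "adult drawing" embedding
child_8y = [0.8, 0.2, 0.7, 0.3]   # placeholder older-child embedding
child_3y = [0.1, 0.9, 0.2, 0.8]   # placeholder younger-child embedding

sim_old = cosine_similarity(adult, child_8y)
sim_young = cosine_similarity(adult, child_3y)
# In the paper's finding, older children's drawings sit closer to
# adult drawings in feature space, as these toy vectors mimic.
```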
DOI: 10.1109/ICDL-EpiRob48136.2020.9278103 · Published 2020-10-26
Citations: 3
Conscious Intelligence Requires Developmental Autonomous Programming For General Purposes
J. Weng
Universal Turing Machines are well known in computer science, but they concern manual programming for general purposes. Although human children perform conscious learning (learning while being conscious) from infancy, it is not widely appreciated that Universal Turing Machines can not only facilitate our understanding of Autonomous Programming For General Purposes (APFGP) by machines but also enable early-age conscious learning. This work reports a new kind of AI: conscious-learning AI, starting from a machine's “baby” time. Instead of arguing about which static tasks a conscious machine should be able to perform during its “adulthood”, this work suggests that APFGP is a computationally clearer and necessary criterion for judging whether a machine is capable of conscious learning, so that it can autonomously acquire skills along its “career path”. The results report new concepts and experimental studies for early vision, audition, natural language understanding, and emotion, with conscious learning capabilities that are absent from traditional AI systems.
DOI: 10.1109/ICDL-EpiRob48136.2020.9278077 · Published 2020-10-26
Citations: 10
Towards a cognitive architecture for self-supervised transfer learning for objects detection with a Humanoid Robot
Jonas Gonzalez-Billandon, A. Sciutti, G. Sandini, F. Rea
Robots are becoming more and more present in our daily life, operating in complex and unstructured environments. To operate autonomously, they must adapt to continuous scene changes and therefore must rely on an incessant learning process. Deep learning methods have reached state-of-the-art results in several domains, such as computer vision and natural language processing. The success of these deep networks relies on large representative datasets used for training and testing. But one limitation of this approach is the sensitivity of these networks to the dataset they were trained on. These networks perform well as long as the training set is a realistic representation of the contextual scenario. For robotic applications, it is difficult to represent in one dataset all the different environments the robot will encounter. On the other hand, a robot has the advantage of acting and perceiving in the complex environment. As a consequence, when interacting with humans it can acquire a substantial amount of relevant data that can be used for learning. The challenge we address in this work is to propose a computational architecture that allows a robot to learn autonomously from its sensors when learning is supported by an interactive human. We took inspiration from the early development of humans and tested our framework on the task of localisation and recognition of objects. We evaluated our framework with the humanoid robot iCub in the experimental context of a realistic interactive scenario. The human subject naturally interacted with the robot, showing objects to the iCub without supervision in the labelling. We demonstrated that our architecture can be used to successfully perform transfer learning for an object localisation network with limited human supervision, and it can be considered a possible enhancement of traditional learning methods for robotics.
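The transfer-learning step can be sketched as keeping a pretrained feature extractor frozen and retraining only a small head on the few labels gathered during interaction. Here a fixed random projection stands in for the pretrained backbone, and the data are synthetic; both are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

# Hypothetical sketch: frozen backbone + retrained linear head on a
# handful of interaction-labelled samples. All data/dims are invented.

rng = np.random.default_rng(1)
D_RAW, D_FEAT = 20, 50
W_frozen = rng.normal(size=(D_FEAT, D_RAW))   # "pretrained", never updated

def features(x):
    """Frozen feature extractor (stand-in for a pretrained network)."""
    return np.tanh(W_frozen @ x)

# A few labelled samples per object class, as gathered via interaction.
X_raw = np.vstack([rng.normal(0.0, 1.0, (10, D_RAW)),
                   rng.normal(2.0, 1.0, (10, D_RAW))])
y = np.array([0] * 10 + [1] * 10)

F = np.array([features(x) for x in X_raw])
# Train only the head: least-squares linear classifier on frozen features.
head, *_ = np.linalg.lstsq(np.c_[F, np.ones(len(F))], 2 * y - 1, rcond=None)

def predict(x):
    f = np.r_[features(x), 1.0]
    return int(f @ head > 0)
```

Only `head` is fitted, which is why a few dozen interaction-labelled samples suffice: the heavy representation learning is inherited from pretraining.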
DOI: 10.1109/ICDL-EpiRob48136.2020.9278078 · Published 2020-10-26
Cited: 4
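The transfer-learning recipe this abstract describes — keep a pretrained feature extractor frozen and fit only a small head on the few labels gathered through human interaction — can be illustrated in toy form. Everything below (the random "backbone" projection, the two-cluster data, the array shapes) is a hypothetical stand-in for illustration, not the authors' iCub pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained backbone": a frozen random projection standing in for
# convolutional features (toy stand-in, not the paper's network).
W_backbone = rng.normal(size=(32, 8))

def backbone(x):
    # Frozen feature extractor: never updated during transfer learning.
    return (x @ W_backbone) / np.sqrt(32)

def sample(cls, n):
    # Two object classes as well-separated clusters in raw "pixel" space.
    centre = np.full(32, 1.0 if cls else -1.0)
    return centre + rng.normal(scale=1.0, size=(n, 32))

# Limited human supervision: only ten labelled examples per class.
X_train = np.vstack([sample(0, 10), sample(1, 10)])
y_train = np.array([0] * 10 + [1] * 10)

# Transfer learning: train just a logistic head on the frozen features.
F = backbone(X_train)
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # head's predicted probabilities
    grad = p - y_train                        # logistic-loss gradient signal
    w -= 0.1 * F.T @ grad / len(y_train)
    b -= 0.1 * grad.mean()

# Evaluate on fresh samples the "robot" has never seen.
X_test = np.vstack([sample(0, 100), sample(1, 100)])
y_test = np.array([0] * 100 + [1] * 100)
p_test = 1.0 / (1.0 + np.exp(-(backbone(X_test) @ w + b)))
accuracy = ((p_test > 0.5).astype(int) == y_test).mean()
print(f"held-out accuracy from 20 labels: {accuracy:.2f}")
```

Because the expensive representation is reused rather than retrained, twenty interactively gathered labels are enough to fit the head — the same economy of supervision the paper exploits on the real robot.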
Motor Habituation: Theory and Experiment
Sophie Aerdker, Jing Feng, G. Schöner
Habituation is the phenomenon that responses to a stimulus weaken over repetitions. Because habituation is selective to the stimulus, it can be used to assess infant perception and cognition. Novelty preference is observed as dishabituation to stimuli that are sufficiently different from the stimulus to which an infant was first habituated. In many cases, there is also evidence for familiarity preference observed early during habituation. In motor development, perseveration, selecting a previously experienced movement over a novel one, is commonly observed. Perseveration may be thought of as analogous to familiarity preference. Is there also habituation to movement and does it induce novelty preference, observed as motor dishabituation? We apply the experimental paradigm of habituation to a motor task and provide experimental evidence for motor habituation, dishabituation and Spencer-Thompson dishabituation. We account for this data in a neural dynamic model that unifies previous neural dynamic accounts for habituation and perseveration.
{"title":"Motor Habituation: Theory and Experiment","authors":"Sophie Aerdker, Jing Feng, G. Schöner","doi":"10.1109/ICDL-EpiRob48136.2020.9278068","DOIUrl":"https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278068","url":null,"abstract":"Habituation is the phenomenon that responses to a stimulus weaken over repetitions. Because habituation is selective to the stimulus, it can be used to assess infant perception and cognition. Novelty preference is observed as dishabituation to stimuli that are sufficiently different from the stimulus to which an infant was first habituated. In many cases, there is also evidence for familiarity preference observed early during habituation. In motor development, perseveration, selecting a previously experienced movement over a novel one, is commonly observed. Perseveration may be thought of as analogous to familiarity preference. Is there also habituation to movement and does it induce novelty preference, observed as motor dishabituation? We apply the experimental paradigm of habituation to a motor task and provide experimental evidence for motor habituation, dishabituation and Spencer-Thompson dishabituation. We account for this data in a neural dynamic model that unifies previous neural dynamic accounts for habituation and perseveration.","PeriodicalId":114948,"journal":{"name":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125022773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
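The habituation–dishabituation pattern the abstract describes can be reproduced by a textbook two-variable dynamical model (a generic illustration, not the authors' neural dynamic architecture): each stimulus channel has a fast response u and a slow habituation variable h that builds up with the response and subtracts from its drive. Repeating one stimulus makes its h accumulate, so response peaks shrink; a novel stimulus addresses a channel whose h is still low, so the response recovers. All parameter values below are arbitrary choices for the sketch:

```python
import numpy as np

def simulate(trials, dt=0.01, tau_u=0.05, tau_h=2.0):
    """Two-channel habituation model: fast response u per channel,
    slow habituation h that grows with u and suppresses the drive."""
    u, h = np.zeros(2), np.zeros(2)
    peaks = []
    for chan, on_time, gap in trials:
        s = np.zeros(2)
        s[chan] = 1.0                # stimulus drives one channel
        peak = 0.0
        for _ in range(int(on_time / dt)):   # stimulus on
            u += dt / tau_u * (-u + s - h)   # fast response dynamics
            h += dt / tau_h * (-h + u)       # slow habituation build-up
            peak = max(peak, u[chan])
        for _ in range(int(gap / dt)):       # inter-trial interval, s = 0
            u += dt / tau_u * (-u - h)
            h += dt / tau_h * (-h + u)
        peaks.append(peak)
    return peaks

# Five presentations of stimulus 0, then one presentation of novel stimulus 1.
peaks = simulate([(0, 1.0, 0.5)] * 5 + [(1, 1.0, 0.5)])
print([round(p, 2) for p in peaks])
```

The first five peaks decline across repetitions (habituation), while the final peak, to the novel stimulus, rebounds (dishabituation, i.e. novelty preference).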