Attitudes towards a handheld robot that learns Proxemics
Pub Date: 2020-10-26 | DOI: 10.1109/ICDL-EpiRob48136.2020.9278098
Chirag Vaswani Bhavnani, Matthias Rolf
Robots that cohabit social spaces must abide by the same behavioural cues humans follow, including interpersonal distancing. Proxemics investigates appropriate interpersonal distances and the factors that affect them, such as gender and age. This paper investigates people's attitudes towards a robot that can learn Proxemics rules by gauging direct individual feedback from a person and utilizing it in a reinforcement learning framework. Previous learning attempts have relied on larger robots, for which physical safety is a primary concern. In contrast, our study uses a handheld-sized robot that allows us to focus on the impact of distance on engageability in dialogue. Interviewees generally reported feeling at ease and safe during interactions, but differed on whether their personal space had been invaded, a perception influenced by cultural background.
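The abstract describes a robot that learns distancing rules from direct individual feedback within a reinforcement learning framework. As a rough illustration of that idea rather than the authors' implementation, the sketch below assumes discretised approach distances as states, a scalar comfort rating from the person as reward, and a tabular Q-learning update; all names and hyperparameters are hypothetical.

```python
import random

# Hypothetical sketch: tabular Q-learning over discretised approach distances.
# States are distance bins (cm); actions move one bin closer, stay, or move away.
DISTANCES = [20, 40, 60, 80, 100]            # assumed distance bins
ACTIONS = [-1, 0, 1]                         # index shift: closer / stay / away
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2        # assumed hyperparameters

q = {(s, a): 0.0 for s in range(len(DISTANCES)) for a in ACTIONS}

def ask_person_for_feedback(distance_cm):
    # Placeholder for the person's direct comfort rating in [-1, 1];
    # here we simulate a preference for roughly 60 cm.
    return 1.0 - abs(distance_cm - 60) / 60.0

def choose_action(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def step(state, action):
    next_state = min(max(state + action, 0), len(DISTANCES) - 1)
    reward = ask_person_for_feedback(DISTANCES[next_state])
    return next_state, reward

def learn(episodes=50, steps=20):
    for _ in range(episodes):
        state = len(DISTANCES) - 1           # start far away from the person
        for _ in range(steps):
            action = choose_action(state)
            next_state, reward = step(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = next_state

learn()
# The preferred distance bin after learning (index into DISTANCES).
print(max(range(len(DISTANCES)), key=lambda s: max(q[(s, a)] for a in ACTIONS)))
```

In the study itself the reward would come from the person's direct feedback at each approach distance rather than from a simulated preference.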
{"title":"Attitudes towards a handheld robot that learns Proxemics","authors":"Chirag Vaswani Bhavnani, Matthias Rolf","doi":"10.1109/ICDL-EpiRob48136.2020.9278098","DOIUrl":"https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278098","url":null,"abstract":"Robots that cohabitate in social spaces must abide by the same behavioural cues humans follow, including interpersonal distancing. Proxemics investigates the appropriate distances and the impact of factors affecting it, such as gender and age. This paper investigates people's attitudes towards a robot that can learn Proxemics rules by gauging direct individual feedback from a person, and utilizing it in a reinforcement learning framework. Previous learning attempts have relied on larger robots, for which physical safety is a primary concern. In contrast, our study uses a handheld sized robot that allows us to focus on the impact of distance on engageability in dialogue. General consensus between interviewees was a feeling of ease and safety during interactions, as well as disparity regarding the invasion of personal space, which was influenced by cultural background.","PeriodicalId":114948,"journal":{"name":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122692411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning over the Attentional Space with Mobile Robots
Pub Date: 2020-10-26 | DOI: 10.1109/ICDL-EpiRob48136.2020.9278119
Letícia M. Berto, L. Rossi, E. Rohmer, P. Costa, A. S. Simões, Ricardo Ribeiro Gudwin, E. Colombini
The advancement of technology has brought many benefits to robotics. Today, it is possible to equip robots with many sensors that continuously collect different kinds of information about the environment. However, this brings a disadvantage: an increase in the amount of information that is received and must be processed. This computation is expensive for robots and becomes especially difficult when it must be performed online and involves a learning process. Attention is a mechanism that can help address the most critical data at every moment and is fundamental to improving learning. This paper discusses the importance of attention in the learning process by evaluating the possibility of learning over the attentional space. For this purpose, we modeled the essential cognitive functions necessary for learning within a cognitive architecture and used bottom-up attention as input to a reinforcement learning algorithm. The results show that the robot can learn on both attentional and sensory spaces. By comparing various action schemes, we identify the set of actions required for successful learning.
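The paper uses bottom-up attention as the input space for reinforcement learning. A minimal sketch of how a camera frame could be reduced to a compact attentional state is given below; the saliency measure (local deviation from mean intensity) and the grid size are assumptions for illustration, not the authors' architecture.

```python
import numpy as np

def bottom_up_saliency(image):
    """Crude bottom-up saliency: local deviation from the mean intensity."""
    return np.abs(image - image.mean())

def attentional_state(image, grid=(4, 4)):
    """Reduce the saliency map to a coarse grid and return the most salient cell.

    The index of that cell serves as a compact attentional state for RL,
    instead of the full sensory frame.
    """
    sal = bottom_up_saliency(image)
    h, w = sal.shape
    gh, gw = h // grid[0], w // grid[1]
    cells = sal[:gh * grid[0], :gw * grid[1]].reshape(grid[0], gh, grid[1], gw).mean(axis=(1, 3))
    return int(np.argmax(cells))             # state index in [0, grid[0] * grid[1])

# Example: a random camera frame mapped to one of 16 attentional states.
frame = np.random.rand(64, 64)
state = attentional_state(frame)
print(state)
```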
{"title":"Learning over the Attentional Space with Mobile Robots","authors":"Letícia M. Berto, L. Rossi, E. Rohmer, P. Costa, A. S. Simões, Ricardo Ribeiro Gudwin, E. Colombini","doi":"10.1109/ICDL-EpiRob48136.2020.9278119","DOIUrl":"https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278119","url":null,"abstract":"The advancement of technology has brought many benefits to robotics. Today, it is possible to have robots equipped with many sensors that collect different kinds of information on the environment all time. However, this brings a disadvantage: the increase of information that is received and needs to be processed. This computation is too expensive for robots and is very difficult when it has to be performed online and involves a learning process. Attention is a mechanism that can help us address the most critical data at every moment and is fundamental to improve learning. This paper discusses the importance of attention in the learning process by evaluating the possibility of learning over the attentional space. For this purpose, we modeled in a cognitive architecture the essential cognitive functions necessary to learn and used bottom-up attention as input to a reinforcement learning algorithm. The results show that the robot can learn on attentional and sensorial spaces. By comparing various action schemes, we find the set of actions for successful learning.","PeriodicalId":114948,"journal":{"name":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127526546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From human action understanding to robot action execution: how the physical properties of handled objects modulate non-verbal cues
Pub Date: 2020-10-26 | DOI: 10.1109/ICDL-EpiRob48136.2020.9278084
N. Duarte, Konstantinos Chatzilygeroudis, J. Santos-Victor, A. Billard
Humans manage to communicate action intentions in a non-verbal way, through body posture and movement. We start from this observation to investigate how a robot can decode a human's non-verbal cues during the manipulation of an object with specific physical properties, in order to learn the adequate level of “carefulness” to use when handling that object. We construct dynamical models of the human behaviour using a human-to-human handover dataset consisting of 3 different cups with different filling levels. We then include these models in the design of an online classifier that identifies the type of action based on the human wrist movement. We close the loop from action understanding to robot action execution with an adaptive and robust controller based on the learned classifier, and evaluate the entire pipeline on a collaborative task with a 7-DOF manipulator. Our results show that it is possible to correctly understand the “carefulness” behaviour of humans during object manipulation, even in a pick-and-place scenario that was not part of the training set.
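The pipeline classifies “careful” versus ordinary handling online from wrist movement. The sketch below illustrates that general idea under strong simplifications: a logistic-regression classifier over two hand-crafted kinematic features (peak speed and movement duration) trained on synthetic trajectories. The authors' dynamical models and handover dataset are not reproduced here; everything below is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def kinematic_features(wrist_positions, dt=0.01):
    """Simple features of a 3-D wrist trajectory: peak speed and movement duration."""
    velocity = np.diff(wrist_positions, axis=0) / dt
    speed = np.linalg.norm(velocity, axis=1)
    return np.array([speed.max(), len(wrist_positions) * dt])

# Toy training data: careful handling (slow, long) vs. casual handling (fast, short).
rng = np.random.default_rng(0)
careful = [np.cumsum(rng.normal(0, 0.002, (200, 3)), axis=0) for _ in range(20)]
casual = [np.cumsum(rng.normal(0, 0.01, (80, 3)), axis=0) for _ in range(20)]
X = np.array([kinematic_features(t) for t in careful + casual])
y = np.array([1] * len(careful) + [0] * len(casual))

clf = LogisticRegression().fit(X, y)

# Online use: classify a new wrist trajectory as it is observed.
new_trajectory = np.cumsum(rng.normal(0, 0.004, (150, 3)), axis=0)
print("careful" if clf.predict([kinematic_features(new_trajectory)])[0] else "not careful")
```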
{"title":"From human action understanding to robot action execution: how the physical properties of handled objects modulate non-verbal cues","authors":"N. Duarte, Konstantinos Chatzilygeroudis, J. Santos-Victor, A. Billard","doi":"10.1109/ICDL-EpiRob48136.2020.9278084","DOIUrl":"https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278084","url":null,"abstract":"Humans manage to communicate action intentions in a non-verbal way, through body posture and movement. We start from this observation to investigate how a robot can decode a human's non-verbal cues during the manipulation of an object, with specific physical properties, to learn the adequate level of “carefulness” to use when handling that object. We construct dynamical models of the human behaviour using a human-to-human handover dataset consisting of 3 different cups with different levels of fillings. We then included these models into the design of an online classifier that identifies the type of action, based on the human wrist movement. We close the loop from action understanding to robot action execution with an adaptive and robust controller based on the learned classifier, and evaluate the entire pipeline on a collaborative task with a 7-DOF manipulator. Our results show that it is possible to correctly understand the “carefulness” behaviour of humans during object manipulation, even in the pick and place scenario, that was not part of the training set.","PeriodicalId":114948,"journal":{"name":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"207 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134129732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Language Acquisition with Echo State Networks: Towards Unsupervised Learning
Pub Date: 2020-10-26 | DOI: 10.1109/ICDL-EpiRob48136.2020.9278041
Thanh Trung Dinh, Xavier Hinaut
Modeling children's language acquisition with robots is a long quest paved with pitfalls. Recently, a sentence parsing model that learns in cross-situational conditions has been proposed: it learns from the robot's visual representations. The model, based on random recurrent neural networks (i.e. reservoirs), can achieve significant performance after a few hundred training examples, more quickly than a theoretical model could. In this study, we investigate the developmental plausibility of such a model: (i) whether it can learn to generalize from single-object sentences to double-object sentences; (ii) whether it can use more plausible representations: (ii.a) inputs as sequences of phonemes (instead of words) and (ii.b) outputs fully independent of sentence structure (in order to enable purely unsupervised cross-situational learning). Interestingly, tasks (i) and (ii.a) are solved in a straightforward fashion, whereas task (ii.b) suggests that learning with tensor representations is a more difficult task.
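The model is built on reservoirs (echo state networks), where a fixed random recurrent network processes the input sequence and only a linear readout is trained. A minimal sketch of that principle follows, with assumed sizes and toy data rather than the actual phoneme and role coding of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_RES, N_OUT = 10, 200, 5              # assumed sizes: phoneme inputs, reservoir, output roles

# Fixed random input and recurrent weights; only the readout W_out is trained.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.normal(0, 1, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and return the final state."""
    x = np.zeros(N_RES)
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
    return x

def train_readout(sequences, targets, ridge=1e-3):
    """Ridge-regression readout from final reservoir states to target vectors."""
    X = np.array([run_reservoir(seq) for seq in sequences])
    Y = np.array(targets)
    return np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ Y)

# Toy usage: random "sentences" (sequences of binary phoneme vectors) mapped to role vectors.
sentences = [rng.integers(0, 2, (15, N_IN)).astype(float) for _ in range(50)]
roles = [rng.random(N_OUT) for _ in range(50)]
W_out = train_readout(sentences, roles)
prediction = run_reservoir(sentences[0]) @ W_out
```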
{"title":"Language Acquisition with Echo State Networks: Towards Unsupervised Learning","authors":"Thanh Trung Dinh, Xavier Hinaut","doi":"10.1109/ICDL-EpiRob48136.2020.9278041","DOIUrl":"https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278041","url":null,"abstract":"The modeling of children language acquisition with robots is a long quest paved with pitfalls. Recently a sentence parsing model learning in cross-situational conditions has been proposed: it learns from the robot visual representations. The model, based on random recurrent neural networks (i.e. reservoirs), can achieve significant performance after few hundreds of training examples, more quickly that what a theoretical model could do. In this study, we investigate the developmental plausibility of such model: (i) if it can learn to generalize from single-object sentence to double-object sentence; (ii) if it can use more plausible representations: (ii.a) inputs as sequence of phonemes (instead of words) and (ii.b) outputs fully independent from sentence structure (in order to enable purely unsupervised cross-situational learning). Interestingly, tasks (i) and (ii.a) are solved in a straightforward fashion, whereas task (ii.b) suggest that that learning with tensor representations is a more difficult task","PeriodicalId":114948,"journal":{"name":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115402522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
No, Your Other Left! Language Children Use To Direct Robots
Pub Date: 2020-10-26 | DOI: 10.1109/ICDL-EpiRob48136.2020.9278108
Deanna Kocher, L. Sarmiento, Samantha Heller, Yupei Yang, T. Kushnir, K. Green
We present an analysis of how children between 4 and 9 years old give directions to a robot. Thirty-eight children in this age range participated in a direction-giving game with a virtual robot and with their caregiver. We considered two different viewpoints (aerial and in-person) and three different affordances (non-humanoid robot, caregiver with eyes closed, and caregiver with eyes open). We report on the frequency of commands that children used, the complexity of the commands, and the navigation styles children used at different ages. We found that pointing and gesturing decreased with age, while “left-right” directions and the use of distances increased with age. From this, we make several recommendations for robot design that would enable a robot to successfully follow directions from children of different ages and help advance children's direction giving.
{"title":"No, Your Other Left! Language Children Use To Direct Robots","authors":"Deanna Kocher, L. Sarmiento, Samantha Heller, Yupei Yang, T. Kushnir, K. Green","doi":"10.1109/ICDL-EpiRob48136.2020.9278108","DOIUrl":"https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278108","url":null,"abstract":"We present an analysis of how children between 4-and 9-years-old give directions to a robot. Thirty-eight children in this age range participated in a direction giving game with a virtual robot and with their caregiver. We considered two different viewpoints (aerial and in-person) and three different affordances (non-humanoid robot, caregiver with eyes closed, and caregiver with eyes open). We report on the frequency of commands that children used, the complexity of the commands, and the navigation styles children used at different ages. We found that pointing and gesturing decreased with age, while “left-right” directions and the use of distances increased with age. From this, we make several recommendations for robot design that would enable a robot to successfully follow directions from children of different ages, and help advance children's direction giving.","PeriodicalId":114948,"journal":{"name":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127357563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Humans Perform Social Movements in Response to Social Robot Movements: Motor Intention in Human-Robot Interaction
Pub Date: 2020-10-26 | DOI: 10.1109/ICDL-EpiRob48136.2020.9278114
Ingar Brinck, Lejla Heco, Kajsa Sikström, Victoria Wandsleb, B. Johansson, C. Balkenius
We tested whether observing a motor action that encodes social motor intention, when performed by a humanoid robot, would cause the spontaneous processing of a complementary response. We designed the robot's arm and upper-body movements to manifest the kinematic profiles of human individual and social motor intention, and designed a simple task in which robot and human placed blocks on a table sequentially. Our results show that the behaviour of the human can be modulated by human kinematics as encoded in a robot's movement. In several cases, human subjects reciprocated movement that displayed social motor intention with movements showing a similar kinematic profile, while attempting to make eye contact and engaging in turn-taking behaviour during the task. This suggests a novel approach to the design of HRI based on motor processing that promises to be ecologically valid, cheap, automatic, fast, resilient, intuitive, and computationally simple.
{"title":"Humans Perform Social Movements in Response to Social Robot Movements: Motor Intention in Human-Robot Interaction","authors":"Ingar Brinck, Lejla Heco, Kajsa Sikström, Victoria Wandsleb, B. Johansson, C. Balkenius","doi":"10.1109/ICDL-EpiRob48136.2020.9278114","DOIUrl":"https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278114","url":null,"abstract":"We tested whether the observation of motor action encoding social motor intention would cause the spontaneous processing of a complementary response when performed by a humanoid robot. We designed the robot's arm and upper body movements to manifest the kinematic profiles of human individual and social motor intention and designed a simple task that involved robot and human placing blocks on a table sequentially. Our results show that the behavior of the human can be modulated by human kinematics as encoded in a robot's movement. In several cases human subjects reciprocated movement that displayed social motor intention with movements showing a similar kinematic profile while attempting to make eye contact and engaging in turn-taking behaviour during the task. This suggests a novel approach in the design of HRI based in motor processing that promises to be ecologically valid, cheap, automatic, fast, resilient, intuitive, and computationally simple.","PeriodicalId":114948,"journal":{"name":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126199032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Picture completion reveals developmental change in representational drawing ability: An analysis using a convolutional neural network
Pub Date: 2020-10-26 | DOI: 10.1109/ICDL-EpiRob48136.2020.9278103
A. Philippsen, S. Tsuji, Y. Nagai
Children's drawings may provide unique insights into their cognition. Previous research showed that children's ability to draw objects distinctively develops with increasing age. In recent studies, convolutional neural networks have been used as a diagnostic tool to show how children's representational ability develops. These studies have focused on top-down task modifications by asking a child to draw specific objects. Object representations, however, are influenced by bottom-up visual perception as well as by top-down intentions. Understanding how these processing pathways are integrated, and how this integration changes with development, is still an open question. In this paper, we investigate how bottom-up modifications of the task affect the representational drawing ability of children. We designed a set of incomplete stimuli and asked children between two and eight years old to draw on them without specific task instructions. We found that the higher layers of a deep convolutional neural network pretrained for image classification on objects and scenes differentiated well between drawing styles (e.g. scribbling vs. meaningful completion), and that older children's drawings were more similar to adult drawings. By analyzing the representations of different age groups, we found that older children adapted to variations in the presented stimuli in a way more similar to adults than younger children did. Therefore, not only a top-down but also a bottom-up modification of stimuli in a drawing task can reveal differences in how children at different ages represent different concepts. This task design opens up the possibility of investigating representational changes independently of language ability, for example in children with developmental disorders.
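The analysis rests on comparing drawings in the feature space of the higher layers of a pretrained convolutional network. A minimal sketch of that kind of comparison is shown below; the backbone (ResNet-18 from torchvision), the cosine-similarity measure, and the file names are assumptions, not the authors' exact setup.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Hypothetical sketch: compare drawings via higher-layer features of a pretrained CNN.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()            # keep the penultimate-layer representation
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def embed(path):
    """Return the higher-layer feature vector for one drawing image."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(image).squeeze(0)

def similarity(path_a, path_b):
    """Cosine similarity between two drawings in CNN feature space."""
    a, b = embed(path_a), embed(path_b)
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

# Usage with hypothetical file names:
# print(similarity("child_completion.png", "adult_completion.png"))
```

A higher similarity of a child's completion to adult completions in this feature space would then serve as a proxy for more adult-like representation.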
{"title":"Picture completion reveals developmental change in representational drawing ability: An analysis using a convolutional neural network","authors":"A. Philippsen, S. Tsuji, Y. Nagai","doi":"10.1109/ICDL-EpiRob48136.2020.9278103","DOIUrl":"https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278103","url":null,"abstract":"Drawings of children may provide unique insights into their cognition. Previous research showed that children's ability to draw objects distinctively develops with increasing age. In recent studies, convolutional neural networks have been used as a diagnostic tool to show how the representational ability of children develops. These studies have focused on top-down task modifications by asking a child to draw specific objects. Object representations, however, are influenced by bottom-up visual perception as well as by top-down intentions. Understanding how these processing pathways are integrated and how this integration changes with development is still an open question. In this paper, we investigate how bottom-up modifications of the task affect the representational drawing ability of children. We designed a set of incomplete stimuli and asked children between two and eight years to draw on them without specific task instructions. We found that the higher layers of a deep convolutional neural network pretrained for image classification on objects and scenes well differentiated between different drawing styles (e.g. scribbling vs. meaningful completion), and that older children's drawings were more similar to adult drawings. By analyzing representations of different age groups, we found that older children adapted to variations in the presented stimuli in a more similar way to adults than younger children. Therefore, not only a top-down but also a bottom-up modification of stimuli in a drawing task can reveal differences in how children at different ages represent different concepts. This task design opens up the possibility to investigate representational changes independently of language ability, for example, in children with developmental disorders.","PeriodicalId":114948,"journal":{"name":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"57 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116150697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conscious Intelligence Requires Developmental Autonomous Programming For General Purposes
Pub Date: 2020-10-26 | DOI: 10.1109/ICDL-EpiRob48136.2020.9278077
J. Weng
Universal Turing Machines are well known in computer science, but they concern manual programming for general purposes. Although human children perform conscious learning (learning while being conscious) from infancy, it is not widely recognized that Universal Turing Machines can not only facilitate our understanding of Autonomous Programming For General Purposes (APFGP) by machines, but also enable early-age conscious learning. This work reports a new kind of AI: conscious-learning AI, starting from a machine's “baby” time. Instead of arguing about which static tasks a conscious machine should be able to do during its “adulthood”, this work suggests that APFGP is a computationally clearer and necessary criterion for judging whether a machine is capable of conscious learning, so that it can autonomously acquire skills along its “career path”. The results here report new concepts and experimental studies for early vision, audition, natural language understanding, and emotion, with conscious learning capabilities that are absent from traditional AI systems.
{"title":"Conscious Intelligence Requires Developmental Autonomous Programming For General Purposes","authors":"J. Weng","doi":"10.1109/ICDL-EpiRob48136.2020.9278077","DOIUrl":"https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278077","url":null,"abstract":"Universal Turing Machines are well known in computer science but they are about manual programming for general purposes. Although human children perform conscious learning (learning while being conscious) from infancy, it is unknown that Universal Turing Machines can facilitate not only our understanding of Autonomous Programming For General Purposes (APFGP) by machines, but also enable early-age conscious learning. This work reports a new kind of AI-conscious learning AI from a machine's “baby” time. Instead of arguing what static tasks a conscious machine should be able to do during its “adulthood”, this work suggests that APFGP is a computationally clearer and necessary criterion for us to judge whether a machine is capable of conscious learning so that it can autonomously acquire skills along its “career path”. The results here report new concepts and experimental studies for early vision, audition, natural language understanding, and emotion, with conscious learning capabilities that are absent from traditional AI systems.","PeriodicalId":114948,"journal":{"name":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130849981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards a cognitive architecture for self-supervised transfer learning for objects detection with a Humanoid Robot
Pub Date: 2020-10-26 | DOI: 10.1109/ICDL-EpiRob48136.2020.9278078
Jonas Gonzalez-Billandon, A. Sciutti, G. Sandini, F. Rea
Robots are becoming more and more present in our daily lives, operating in complex and unstructured environments. To operate autonomously they must adapt to continuous scene changes and therefore rely on a continual learning process. Deep learning methods have reached state-of-the-art results in several domains such as computer vision and natural language processing. The success of these deep networks relies on large representative datasets used for training and testing. One limitation of this approach, however, is the sensitivity of these networks to the dataset they were trained on. These networks perform well as long as the training set is a realistic representation of the contextual scenario. For robotic applications, it is difficult to represent in one dataset all the different environments the robot will encounter. On the other hand, a robot has the advantage of acting and perceiving in the complex environment. As a consequence, when interacting with humans it can acquire a substantial amount of relevant data that can be used for learning. The challenge we address in this work is to propose a computational architecture that allows a robot to learn autonomously from its sensors when learning is supported by an interactive human. We took inspiration from the early development of humans and tested our framework on the task of localisation and recognition of objects. We evaluated our framework with the humanoid robot iCub in the experimental context of a realistic interactive scenario. The human subject naturally interacted with the robot, showing objects to the iCub without supervised labelling. We demonstrated that our architecture can be used to successfully perform transfer learning for an object localisation network with limited human supervision, and can be considered a possible enhancement of traditional learning methods for robotics.
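At its core, the transfer-learning step adapts a pretrained network to the few examples the robot collects during interaction. A minimal, hypothetical sketch of such a step, freezing a pretrained backbone and fine-tuning only a small head, is given below; the backbone choice, class count, and training loop are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical sketch of the transfer-learning step: a pretrained backbone is frozen
# and only a small classification head is fine-tuned on the few samples the robot
# collected during interaction with the human.
NUM_CLASSES = 5                              # assumed number of object classes

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                  # keep pretrained features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)   # trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune(images, labels, epochs=5):
    """images: (N, 3, 224, 224) tensor of robot-collected crops; labels: (N,) tensor."""
    backbone.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(backbone(images), labels)
        loss.backward()
        optimizer.step()

# Toy usage with random data standing in for the robot's self-collected examples.
fine_tune(torch.randn(8, 3, 224, 224), torch.randint(0, NUM_CLASSES, (8,)))
```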
{"title":"Towards a cognitive architecture for self-supervised transfer learning for objects detection with a Humanoid Robot","authors":"Jonas Gonzalez-Billandon, A. Sciutti, G. Sandini, F. Rea","doi":"10.1109/ICDL-EpiRob48136.2020.9278078","DOIUrl":"https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278078","url":null,"abstract":"Robots are becoming more and more present in our daily life operating in complex and unstructured environments. To operate autonomously they must adapt to continuous scene changes and therefore must rely on an incessant learning process. Deep learning methods have reached state-of-the-art results in several domains like computer vision and natural language processing. The success of these deep networks relies on large representative datasets used for training and testing. But one limitation of this approach is the sensitivity of these networks to the dataset they were trained on. These networks perform well as long as the training set is a realistic representation of the contextual scenario. For robotic applications, it is difficult to represent in one dataset all the different environments the robot will encounter. On the other hand, a robot has the advantage to act and to perceive in the complex environment. As a consequence when interacting with humans it can acquire a substantial amount of relevant data, that can be used to perform learning. The challenge we addressed in this work is to propose a computational architecture that allows a robot to learn autonomously from its sensors when learning is supported by an interactive human. We took inspiration on the early development of humans and test our framework on the task of localisation and recognition of objects. We evaluated our framework with the humanoid robot iCub in the experimental context of a realistic interactive scenario. The human subject naturally interacted with the robot showing objects to the iCub without supervision in the labelling. We demonstrated that our architecture can be used to successfully perform transfer learning for an object localisation network with limited human supervision and can be considered a possible enhancement of traditional learning methods for robotics.","PeriodicalId":114948,"journal":{"name":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121883968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motor Habituation: Theory and Experiment
Pub Date: 2020-10-26 | DOI: 10.1109/ICDL-EpiRob48136.2020.9278068
Sophie Aerdker, Jing Feng, G. Schöner
Habituation is the phenomenon whereby responses to a stimulus weaken over repetitions. Because habituation is selective to the stimulus, it can be used to assess infant perception and cognition. Novelty preference is observed as dishabituation to stimuli that are sufficiently different from the stimulus to which an infant was first habituated. In many cases, there is also evidence for familiarity preference observed early during habituation. In motor development, perseveration, selecting a previously experienced movement over a novel one, is commonly observed. Perseveration may be thought of as analogous to familiarity preference. Is there also habituation to movement, and does it induce novelty preference, observed as motor dishabituation? We apply the experimental paradigm of habituation to a motor task and provide experimental evidence for motor habituation, dishabituation, and Spencer-Thompson dishabituation. We account for these data in a neural dynamic model that unifies previous neural dynamic accounts of habituation and perseveration.
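The neural dynamic account rests on a habituation variable that builds up with repeated stimulation and suppresses the response, while a novel stimulus still elicits a strong response. The sketch below shows that qualitative mechanism in its simplest form; the equations, time constants, and gains are illustrative assumptions, not the paper's neural-field model.

```python
import numpy as np

# Minimal sketch of habituation dynamics: each stimulus has a habituation
# variable h that builds up with exposure and suppresses the response u.
TAU_U, TAU_H = 1.0, 20.0                     # assumed time constants (response fast, habituation slow)
DT = 0.1

def simulate(stimulus_sequence, steps_per_trial=100):
    """stimulus_sequence: list of stimulus ids, e.g. [0, 0, 0, 0, 1]."""
    n_stimuli = max(stimulus_sequence) + 1
    h = np.zeros(n_stimuli)                  # per-stimulus habituation
    peak_responses = []
    for s in stimulus_sequence:
        u, peak = 0.0, 0.0
        for _ in range(steps_per_trial):
            u += DT / TAU_U * (-u + 1.0 - h[s])           # stimulus drive minus habituation
            h[s] += DT / TAU_H * (-0.05 * h[s] + max(u, 0.0))
            peak = max(peak, u)
        peak_responses.append(peak)
    return peak_responses

# Repeated presentations of stimulus 0 weaken the response (habituation);
# a novel stimulus 1 elicits a strong response again (dishabituation).
print(simulate([0, 0, 0, 0, 1]))
```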
{"title":"Motor Habituation: Theory and Experiment","authors":"Sophie Aerdker, Jing Feng, G. Schöner","doi":"10.1109/ICDL-EpiRob48136.2020.9278068","DOIUrl":"https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278068","url":null,"abstract":"Habituation is the phenomenon that responses to a stimulus weaken over repetitions. Because habituation is selective to the stimulus, it can be used to assess infant perception and cognition. Novelty preference is observed as dishabituation to stimuli that are sufficiently different from the stimulus to which an infant was first habituated. In many cases, there is also evidence for familiarity preference observed early during habituation. In motor development, perseveration, selecting a previously experienced movement over a novel one, is commonly observed. Perseveration may be thought of as analogous to familiarity preference. Is there also habituation to movement and does it induce novelty preference, observed as motor dishabituation? We apply the experimental paradigm of habituation to a motor task and provide experimental evidence for motor habituation, disha-bituation and Spencer-Thompson dishabituation. We account for this data in a neural dynamic model that unifies previous neural dynamic accounts for habituation and perseveration.","PeriodicalId":114948,"journal":{"name":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125022773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}