Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490940
S. Ishikawa, T. Omori
Acquiring several kinds of "models" of the environment and of social agents, in the form of knowledge and rules, and deciding one's own next behavior based on those rules, is an important aspect of cognitive development. In this study, we investigated the developmental process by which children acquire and use these "models" in the social world. Children were tested with two experimental TV-game tasks, and we analyzed their behavior in detail from several viewpoints: representation of self, recognition of the outer environment, and recognition of interactive agents. Four- and five-year-old children showed that they understood contingencies between their own actions and visual feedback and memorized the visual environment. Moreover, five- and six-year-olds selected adequate information sources to decide their own next behavior. These data indicate that children develop their own internal "models" for recognizing their environment and making decisions about behavior.
Title: How to Behave in the Social World: Behavioral Analysis and Modeling for Development of Cognitive Processes
Published in: Proceedings of the 4th International Conference on Development and Learning, 2005.
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490939
Syogo Yonekura, Max Lungarella, Yasuo Kuniyoshi
We consider how emotions emerge from the interaction of an organism's brain and body with its surrounding environment. In particular, we describe the design and implementation of a virtual "sea anemone" in which the interplay of neural and body-environment dynamics leads to the emergence of locomotion, oscillations, and freezing, movement patterns that can be associated with particular emotional states. The neurons composing the creature's neural architecture are modeled as Hindmarsh-Rose bursting neurons. Our results show that the coupling of neural and body-environment dynamics produces a persistent reflex-induced fear-like response following collisions with objects. In the absence of sensory feedback, however, the creature locomotes and the fear-like state disappears. Based on our experimental results, we introduce a novel hypothesis to explain the emergence of primitive emotions: fear is induced by a conflict between the neural "intention" to locomote and the body-environment-related tendency not to locomote. This study may shed light on the embodied basis of emotional behavior.
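The abstract names Hindmarsh-Rose bursting neurons as the building block of the creature's neural architecture. As a point of reference (not the authors' implementation), the following is a minimal sketch of a single Hindmarsh-Rose neuron integrated with the Euler method, using standard textbook parameter values that produce bursting; the parameter choices and the spike-counting threshold are assumptions.

```python
import numpy as np

def hindmarsh_rose(I=3.2, dt=0.01, steps=100_000,
                   a=1.0, b=3.0, c=1.0, d=5.0,
                   r=0.006, s=4.0, x_rest=-1.6):
    """Euler integration of one Hindmarsh-Rose neuron.

    x: membrane potential, y: fast recovery, z: slow adaptation.
    With these (textbook) parameters the neuron bursts: groups of
    spikes separated by quiescent periods.
    """
    x, y, z = -1.6, 0.0, 0.0
    xs = np.empty(steps)
    for t in range(steps):
        dx = y + b * x**2 - a * x**3 - z + I   # fast voltage dynamics
        dy = c - d * x**2 - y                  # fast recovery variable
        dz = r * (s * (x - x_rest) - z)        # slow adaptation current
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[t] = x
    return xs

trace = hindmarsh_rose()
# count spikes as upward crossings of an (assumed) threshold of 1.0
spikes = int(np.sum((trace[:-1] < 1.0) & (trace[1:] >= 1.0)))
```

Driving such units with sensory feedback, or cutting that feedback, is the kind of manipulation the paper associates with the appearance and disappearance of the fear-like state.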
Title: Fear-like response induced by intentional gap between neural and body-environment dynamics
Published in: Proceedings of the 4th International Conference on Development and Learning, 2005.
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490952
Y. Moriguchi, S. Itakura
This study investigated whether 2-year-old children fail to control their actions when given wrong social cues by another person in a search task. In a control session, children observed an experimenter hide a toy in one of three cups and were then allowed to search for it. In a test session, children observed a toy being hidden in one of three cups; one experimenter then asked another experimenter to point to the cup in which the toy was hidden. After she pointed to the wrong cup, in which there was no toy, children were allowed to search for the toy. Results indicated that children were more likely to fail to find the toy in the test session than in the control session. This suggests that children form a strong representation when given wrong social cues by another person and cannot inhibit that representation when searching for the toy. This is the first study to suggest that children's social cognition can disturb their control of action.
Title: Does Pointing Comprehension Disturb Controlling Action? Evidence from 2-year-old children
Published in: Proceedings of the 4th International Conference on Development and Learning, 2005.
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490970
Birger Johansson Lund
To study and simulate cognitive development, it is useful to find a natural domain where many cognitive functions are needed but where the complexity of the environment and task remains reasonable. We suggest that children's games constitute such a domain. Play is a fundamental part of children's daily life and is crucial for the future development of behavior (Bekoff and Byers, 1998). In the individual domain, coordination of the body is an important feature of play: when a child plays, it refines its motor and sensory skills and develops more advanced movement patterns, e.g. in running or cycling. When playing with other individuals, cognitive functions for social interaction, coordination, and emotional control are also exercised. Trafton et al. (2003) programmed a robot to model a 3-4-year-old child playing hide and seek; the robot moves around their laboratory, tries to hide behind different objects, and learns which hiding places are good ones. Games like hide and seek, tag, and other popular children's games were used. By bringing the best developmental and learning models together, we hope to find out more about the mechanisms of cognitive development.
Title: It's a Child's Game: Investigating Cognitive Development with Playing Robots
Published in: Proceedings of the 4th International Conference on Development and Learning, 2005.
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490965
D. Yamamoto, M. Doi, N. Matsuhira, H. Ueda, M. Kidode
Robotic interfaces for home applications need a combination of practical behaviors and familiar behaviors. Recently, robotic interfaces have been attracting attention for their potential to serve as intermediaries between complicated information equipment and users. Such an interface requires practical behaviors to execute the user's orders reliably, and also familiar behaviors to accept the user's orders accurately. Following the example of Piaget's circular reactions, the robot repeats the user's instructions. For example, when the robotic interface receives a user's channel-change request, it not only changes the channel but also speaks in a friendly manner, e.g. saying "the 4th channel, the 4th channel." The user naturally concludes that the robot has understood the request. Based on this concept, a new robotic interface combines practicality and familiarity. In this paper, the effects of familiar behaviors installed in a robotic information home appliance, "ApriAlpha", are studied. The experimental results show that task execution with speech repetition, object gazing, and moving toward the object scores highest on the familiarity factor. Elderly people tend to prefer familiar behaviors more than young people do.
Title: Familiar Behaviors Evaluation for a Robotic Interface of Practicality and Familiarity
Published in: Proceedings of the 4th International Conference on Development and Learning, 2005.
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490947
S. Takamuku, Yasutake Takahashi, M. Asada
Studies of lexicon acquisition systems are gaining attention, both in the hope of providing a natural human-robot interface and as a test environment for theories of infant lexicon acquisition. This paper presents a system that forms word categories based on object-oriented behaviors. By using physical experiences, the system is able to generalize names to objects with various visual features. The system was implemented on a mobile robot that acquired a lexicon for object categories with different rolling preferences. The system successfully acquired the lexicon and generalized the names to objects with various visual features in accordance with their rolling preferences.
Title: Lexicon Acquisition based on Behavior Learning
Published in: Proceedings of the 4th International Conference on Development and Learning, 2005.
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490974
K. Sakamoto, T. Onizawa, M. Yano
When spots are presented successively, apparent motion is perceived. Computationally, motion competition in apparent motion is an ill-posed problem, and revealing the underlying mechanisms is an important issue not only in cognitive science but also in computer vision. Our previous studies (Sugiura et al., 1998) revealed how spatial and temporal factors counterbalance in motion competition, and found that this counterbalancing relation was modulated by the directional condition between the competing motions, that is, whether the two motions were directed in the same or opposite directions. However, it was not clear whether this effect was caused by the directional difference of the motions or by laterality, namely, the difference between bi-hemispheric and mono-hemispheric motion processing. Here, we show that this directional modulation originates from laterality. This fact suggests that cortico-cortical interaction is responsible for the spatiotemporal integration underlying perception of moving objects in complex environments.
Title: Competition between Spatial and Temporal Factors in Simple Apparent Motion is Modulated by Laterality
Published in: Proceedings of the 4th International Conference on Development and Learning, 2005.
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490955
A. I. Kovacs, H. Ueno
The fabrication of veritable cognizers relies on an understanding of possible substrates of cognition. Cognitive substrates are subsystems of cognizers, though cognition itself is not a property of them alone but emerges from their interaction with the rest of the cognizer and the environment. Ultimately, all cognitive functions, including cognitive development and learning, need to be explained in terms of processes in cognitive substrates, without recourse to logically incoherent reified knowledge, concept structures, behaviors, etc. We describe what cognitive substrates are and where they fit in. We discuss what learning means from the viewpoint of cognitive substrates and what can and must be learned by them. We conclude that learning is ongoing change in the system organization of the cognitive substrate itself, and that the only representation available in a cognitive substrate concerns the innards of the cognizer itself. The only models that can exist in and be learned by a cognitive substrate are not of the external environment or of objects and events in it, but of the processes and states the system engages in when interacting with the environment or things in the world.
Title: Cognitive Substrates: What They Are and What They Learn
Published in: Proceedings of the 4th International Conference on Development and Learning, 2005.
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490986
Heeyoul Choi, Seungjin Choi
It is well known that, in the human brain, perception is based on similarity rather than coordinates and is carried out on the manifold of the data set. Isomap (Tenenbaum et al., 2000) is one of the most widely used low-dimensional embedding methods, in which approximate geodesic distance on a weighted graph is used in the framework of classical scaling (metric MDS). In this paper, we consider two critical issues missing in Isomap: (1) the generalization property and (2) topological stability, and we present a robust kernel Isomap method equipped with both properties. The usefulness and validity of our robust kernel Isomap are confirmed through numerical experiments with several data sets, including real-world data.
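For readers unfamiliar with the baseline the paper extends, the following is a minimal sketch of classical Isomap (not the authors' kernel variant): build a k-nearest-neighbor graph, compute geodesic distances along it, then apply classical scaling (metric MDS). The toy data set and parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=6, n_components=2):
    """Minimal classical Isomap: kNN graph -> geodesics -> metric MDS."""
    n = X.shape[0]
    # pairwise Euclidean distances
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # keep only the k nearest neighbors of each point (inf = no edge)
    G = np.full((n, n), np.inf)
    idx = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        G[i, idx[i]] = D[i, idx[i]]
    G = np.minimum(G, G.T)  # symmetrize the graph
    # approximate geodesic distances = shortest paths along the graph
    DG = shortest_path(G, method="D", directed=False)
    # classical scaling (metric MDS) on the squared geodesic distances
    H = np.eye(n) - np.ones((n, n)) / n         # centering matrix
    B = -0.5 * H @ (DG ** 2) @ H
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:n_components]  # top eigenpairs
    return V[:, order] * np.sqrt(np.maximum(w[order], 0))

# toy data: a slightly noisy circle in the plane
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
X = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.standard_normal((60, 2))
Y = isomap(X)
```

The two issues the paper targets are visible in this sketch: nothing here embeds a *new* point without recomputing everything (generalization), and a single noisy "short-circuit" edge in the kNN graph can distort all geodesics (topological stability).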
Title: Kernel Isomap on Noisy Manifold
Published in: Proceedings of the 4th International Conference on Development and Learning, 2005.
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490961
K. An, Dong Hyun Yoo, Sung-Uk Jung, M. Chung
Various face tracking algorithms have been proposed for tracking faces in video sequences. However, most of them have difficulty finding the initial position and size of a face automatically. In this paper, we present a fast and robust method for fully automatic multi-view face detection and tracking. Using a small number of critical rectangle features selected and trained by the AdaBoost learning algorithm, we can correctly detect the initial position, size, and view of a face. Once a face is reliably detected, we extract face and upper-body color distributions from the detected facial and upper-body regions to build robust color models. Each color model is built using k-means clustering and multiple Gaussian models. Fast and efficient multi-view face tracking is then performed using several critical features. Our algorithm is robust to rotation, partial occlusion, and scale changes against dynamic, unstructured backgrounds. In addition, it is computationally efficient and can therefore run in real time.
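The color-modeling step the abstract describes starts from k-means clustering over the pixels of a detected region. As a hedged illustration of that step only (the synthetic "skin" and "background" pixel clouds and the deterministic initialization are assumptions, not the paper's data or method), a plain k-means over RGB pixels looks like this:

```python
import numpy as np

def kmeans_colors(pixels, k=2, iters=20, init=None):
    """Plain k-means over RGB pixels, summarizing a region's colors.

    init: optional indices of pixels used as initial centers
    (deterministic initialization for this demo).
    """
    centers = (pixels[:k] if init is None else pixels[init]).astype(float).copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # assign each pixel to its nearest center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

# synthetic pixel clouds: a "skin-like" cluster and a darker "background"
rng = np.random.default_rng(1)
skin = rng.normal([200, 160, 140], 5, (300, 3))
bg = rng.normal([40, 60, 50], 5, (300, 3))
pixels = np.vstack([skin, bg])
# initialize one center in each cloud so convergence is deterministic
centers, labels = kmeans_colors(pixels, k=2, init=[0, 300])
```

In the paper's pipeline, each resulting cluster would then be summarized by a Gaussian (mean and covariance), giving the multiple-Gaussian color model used during tracking.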
Title: Real-Time Multi-View Face Tracking for Human-Robot Interaction
Published in: Proceedings of the 4th International Conference on Development and Learning, 2005.