Arun Kumar, Zhengwei Wu, Xaq Pitkow, Paul Schrater
Animal behavior is not driven simply by its current observations, but is strongly influenced by internal states. Estimating the structure of these internal states is crucial for understanding the neural basis of behavior. In principle, internal states can be estimated by inverting behavior models, as in inverse model-based Reinforcement Learning. However, this requires careful parameterization and risks model mismatch to the animal. Here we take a data-driven approach to infer latent states directly from observations of behavior, using a partially observable switching semi-Markov process. This process has two elements critical for capturing animal behavior: it captures non-exponential distributions of times between observations, and it allows transitions between latent states to depend on the animal's actions, features that would otherwise require more complex non-Markovian models to represent. To demonstrate the utility of our approach, we apply it to the observations of a simulated optimal agent performing a foraging task, and find that the latent dynamics extracted by the model correspond to the belief dynamics of the agent. Finally, we apply our model to identify latent states in the behavior of a monkey performing a foraging task, and find clusters of latent states that identify periods of time consistent with expectant waiting. This data-driven behavioral model will be valuable for inferring latent cognitive states, and thereby for measuring neural representations of those states.
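The model class is easy to state generatively. Below is a minimal, hypothetical Python sketch (our illustration, not the authors' implementation) of a switching semi-Markov process: dwell times in each latent state are drawn from a non-exponential (Gamma) distribution, and the transition probability at each switch depends on the action emitted, the two features the abstract highlights.

```python
import random

def sample_semi_markov(n_events, seed=0):
    """Generative sketch of a switching semi-Markov process.

    Each event is (state, dwell_time, action): the latent state persists
    for a Gamma-distributed (non-exponential) dwell time, and the next
    state depends on the action emitted at the switch.
    State and action names are illustrative.
    """
    rng = random.Random(seed)
    state = "search"
    trace = []
    for _ in range(n_events):
        dwell = rng.gammavariate(3.0, 1.0)          # non-exponential dwell time
        action = "move" if state == "search" else "stay"
        trace.append((state, dwell, action))
        # action-dependent transition: 'move' makes a state switch more likely
        p_switch = 0.8 if action == "move" else 0.3
        if rng.random() < p_switch:
            state = "wait" if state == "search" else "search"
    return trace
```

Because dwell times are Gamma rather than exponential, the hazard of leaving a state is not constant in time, which is what a plain Markov chain cannot capture.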
{"title":"Belief dynamics extraction.","authors":"Arun Kumar, Zhengwei Wu, Xaq Pitkow, Paul Schrater","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Animal behavior is not driven simply by its current observations, but is strongly influenced by internal states. Estimating the structure of these internal states is crucial for understanding the neural basis of behavior. In principle, internal states can be estimated by inverting behavior models, as in inverse model-based Reinforcement Learning. However, this requires careful parameterization and risks model-mismatch to the animal. Here we take a data-driven approach to infer latent states directly from observations of behavior, using a partially observable switching semi-Markov process. This process has two elements critical for capturing animal behavior: it captures non-exponential distribution of times between observations, and transitions between latent states depend on the animal's actions, features that require more complex non-markovian models to represent. To demonstrate the utility of our approach, we apply it to the observations of a simulated optimal agent performing a foraging task, and find that latent dynamics extracted by the model has correspondences with the belief dynamics of the agent. Finally, we apply our model to identify latent states in the behaviors of monkey performing a foraging task, and find clusters of latent states that identify periods of time consistent with expectant waiting. This data-driven behavioral model will be valuable for inferring latent cognitive states, and thereby for measuring neural representations of those states.</p>","PeriodicalId":72634,"journal":{"name":"CogSci ... Annual Conference of the Cognitive Science Society. Cognitive Science Society (U.S.). 
Conference","volume":"2019 ","pages":"2058-2064"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7754614/pdf/nihms-1654209.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39103239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hadar Karmazyn Raz, Drew H Abney, David Crandall, Chen Yu, Linda B Smith
Infants are powerful learners. A large corpus of experimental paradigms demonstrates that infants readily learn distributional cues of name-object co-occurrences. But infants' natural learning environment is cluttered: every heard word has multiple competing referents in view. Here we ask how infants start learning name-object co-occurrences in naturalistic learning environments that are cluttered and visually ambiguous. The framework presented in this paper integrates a naturalistic behavioral study with an application of a machine learning model. Our behavioral findings suggest that, in order to start learning object names, infants and their parents consistently select a small set of objects to play with during a given period of time. What emerges is a frequency distribution over a few toys that approximates a Zipfian frequency distribution of objects for learning. We find that a machine learning model trained with a Zipf-like distribution of these object images outperformed the same model trained with a uniform distribution. Overall, these findings suggest that, to overcome referential ambiguity in clutter, infants may select just a few toys, allowing them to learn many distributional cues about a few name-object pairs.
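The Zipfian exposure profile described above has a simple form: frequency falls off as the inverse of rank. A minimal sketch (the function name and exponent default are our own):

```python
def zipf_frequencies(n_items, s=1.0):
    """Zipf-like frequency distribution: frequency proportional to
    1/rank**s, normalized to sum to 1."""
    weights = [1.0 / (rank ** s) for rank in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```

For 10 toys with s = 1, the top-ranked toy accounts for about 34% of exposures, versus 10% under a uniform distribution, which is the contrast the model comparison above exploits.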
{"title":"How do infants start learning object names in a sea of clutter?","authors":"Hadar Karmazyn Raz, Drew H Abney, David Crandall, Chen Yu, Linda B Smith","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Infants are powerful learners. A large corpus of experimental paradigms demonstrate that infants readily learn distributional cues of name-object co-occurrences. But infants' natural learning environment is cluttered: every heard word has multiple competing referents in view. Here we ask how infants start learning name-object co-occurrences in naturalistic learning environments that are cluttered and where there is much visual ambiguity. The framework presented in this paper integrates a naturalistic behavioral study and an application of a machine learning model. Our behavioral findings suggest that in order to start learning object names, infants and their parents consistently select a set of a few objects to play with during a set amount of time. What emerges is a frequency distribution of a few toys that approximates a Zipfian frequency distribution of objects for learning. We find that a machine learning model trained with a Zipf-like distribution of these object images outperformed the model trained with a uniform distribution. Overall, these findings suggest that to overcome referential ambiguity in clutter, infants may be selecting just a few toys allowing them to learn many distributional cues about a few name-object pairs.</p>","PeriodicalId":72634,"journal":{"name":"CogSci ... Annual Conference of the Cognitive Science Society. Cognitive Science Society (U.S.). 
Conference","volume":"2019 ","pages":"521-526"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7903936/pdf/nihms-1673092.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25406160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Everett Mettler, Austin S Phillips, Christine M Massey, Timothy Burke, Patrick Garrigan, Philip J Kellman
Adaptive learning systems that generate spacing intervals based on learner performance enhance learning efficiency and retention (Mettler, Massey & Kellman, 2016). Recent research in factual learning suggests that initial blocks of passive trials, where learners observe correct answers without overtly responding, produce greater learning than passive or active trials alone (Mettler, Massey, Burke, Garrigan & Kellman, 2018). Here we tested whether this passive + active advantage generalizes beyond factual learning to perceptual learning. Participants studied and classified images of butterfly genera using either: 1) Passive Only presentations, 2) Passive Initial Blocks followed by active, adaptive scheduling, 3) Passive Initial Category Exemplar followed by active, adaptive scheduling, or 4) Active Only learning. We found an advantage for combinations of active and passive presentations over Passive Only or Active Only presentations. Passive trials presented in initial blocks showed the best performance, paralleling earlier findings in factual learning. Combining active and passive learning produces greater learning gains than either alone, and these effects occur for diverse forms of learning, including perceptual learning.
{"title":"The Synergy of Passive and Active Learning Modes in Adaptive Perceptual Learning.","authors":"Everett Mettler, Austin S Phillips, Christine M Massey, Timothy Burke, Patrick Garrigan, Philip J Kellman","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Adaptive learning systems that generate spacing intervals based on learner performance enhance learning efficiency and retention (Mettler, Massey & Kellman, 2016). Recent research in factual learning suggests that initial blocks of passive trials, where learners observe correct answers without overtly responding, produce greater learning than passive or active trials alone (Mettler, Massey, Burke, Garrigan & Kellman, 2018). Here we tested whether this passive + active advantage generalizes beyond factual learning to perceptual learning. Participants studied and classified images of butterfly genera using either: 1) <i>Passive Only</i> presentations, 2) <i>Passive Initial Blocks</i> followed by active, adaptive scheduling, 3) <i>Passive Initial Category Exemplar</i> followed by active, adaptive scheduling, or 4) <i>Active Only</i> learning. We found an advantage for combinations of active and passive presentations over <i>Passive Only</i> or <i>Active Only</i> presentations. Passive trials presented in initial blocks showed the best performance, paralleling earlier findings in factual learning. Combining active and passive learning produces greater learning gains than either alone, and these effects occur for diverse forms of learning, including perceptual learning.</p>","PeriodicalId":72634,"journal":{"name":"CogSci ... Annual Conference of the Cognitive Science Society. Cognitive Science Society (U.S.). 
Conference","volume":"2019 ","pages":"2351-2357"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10658780/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138178174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Before infants become mature speakers of their native language, they must acquire a robust word-recognition system that allows them to strike a balance between tolerating some variation (mood, voice, accent) and recognizing variability that potentially changes meaning (e.g., cat vs. hat). The current meta-analysis quantifies how the latter ability, termed mispronunciation sensitivity, changes over infants' first three years, testing competing predictions of mainstream language acquisition theories. Our results show that infants were sensitive to mispronunciations, but accepted them as labels for target objects. Interestingly, and in contrast to predictions of mainstream theories, mispronunciation sensitivity was not modulated by infant age, suggesting that a sufficiently flexible understanding of native-language phonology is in place at a young age.
{"title":"A Meta-Analysis of Infants' Mispronunciation Sensitivity Development.","authors":"Katie Von Holzen, Christina Bergmann","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Before infants become mature speakers of their native language, they must acquire a robust word-recognition system which allows them to strike the balance between allowing some variation (mood, voice, accent) and recognizing variability that potentially changes meaning (e.g. cat vs hat). The current meta-analysis quantifies how the latter, termed mispronunciation sensitivity, changes over infants first three years, testing competing predictions of mainstream language acquisition theories. Our results show that infants were sensitive to mispronunciations, but accepted them as labels for target objects. Interestingly, and in contrast to predictions of mainstream theories, mispronunciation sensitivity was not modulated by infant age, suggesting that a sufficiently flexible understanding of native language phonology is in place at a young age.</p>","PeriodicalId":72634,"journal":{"name":"CogSci ... Annual Conference of the Cognitive Science Society. Cognitive Science Society (U.S.). Conference","volume":"2018 ","pages":"1157-1162"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6476320/pdf/nihms-1017988.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41221806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Darrell A Worthy, A Ross Otto, Astin C Cornwall, Hilary J Don, Tyler Davis
The Delta and Decay rules are two learning rules used to update expected values in reinforcement learning (RL) models. The Delta rule learns average rewards, whereas the Decay rule learns cumulative rewards for each option. Participants learned to select between pairs of options that had reward probabilities of .65 (option A) versus .35 (option B) or .75 (option C) versus .25 (option D) on separate trials in a binary-outcome choice task. Crucially, during training there were twice as many AB trials as CD trials, so participants experienced more cumulative reward from option A even though option C had a higher average reward rate (.75 versus .65). Participants then decided between novel combinations of options (e.g., A versus C). The Decay model predicted more A choices, but the Delta model predicted more C choices, because those respective options had higher cumulative versus average reward values. Results were more in line with the Decay model's predictions. This suggests that people may retrieve memories of cumulative reward to compute expected value instead of learning average rewards for each option.
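The divergent predictions can be reproduced with a deterministic toy simulation of the training phase (our sketch under simplifying assumptions, not the authors' fitted models: choices are greedy and each choice earns its expected reward rather than a sampled binary outcome). The Delta rule drives values toward average reward, so C ends above A; the Decay rule accumulates reward across the more numerous AB trials, so A ends above C.

```python
def simulate(rule, n_cd=100, alpha=0.1, decay=0.95):
    """Train on the AB/CD task and return final option values.

    rule: "delta" (learn average reward) or "decay" (learn decaying
    cumulative reward). Parameter values are illustrative.
    """
    p = {"A": 0.65, "B": 0.35, "C": 0.75, "D": 0.25}
    V = {k: 0.0 for k in p}
    trials = (["AB"] * 2 + ["CD"]) * n_cd      # twice as many AB as CD trials
    for pair in trials:
        a, b = pair
        choice = a if V[a] >= V[b] else b      # greedy choice
        r = p[choice]                          # expected reward (deterministic sketch)
        if rule == "delta":
            V[choice] += alpha * (r - V[choice])   # move toward average reward
        else:
            for k in V:                        # all values decay every trial...
                V[k] *= decay
            V[choice] += r                     # ...and reward accrues to the choice
    return V
```

Running `simulate("delta")` leaves C's value near .75 and A's near .65 (C preferred), while `simulate("decay")` leaves A's accumulated value well above C's (A preferred), mirroring the two models' test-phase predictions.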
{"title":"A Case of Divergent Predictions Made by Delta and Decay Rule Learning Models.","authors":"Darrell A Worthy, A Ross Otto, Astin C Cornwall, Hilary J Don, Tyler Davis","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The Delta and Decay rules are two learning rules used to update expected values in reinforcement learning (RL) models. The delta rule learns <i>average</i> rewards, whereas the decay rule learns <i>cumulative</i> rewards for each option. Participants learned to select between pairs of options that had reward probabilities of .65 (option A) versus .35 (option B) or .75 (option C) versus .25 (option D) on separate trials in a binary-outcome choice task. Crucially, during training there were twice as AB trials as CD trials, therefore participants experienced more cumulative reward from option A even though option C had a higher average reward rate (.75 versus .65). Participants then decided between novel combinations of options (e.g, A versus C). The Decay model predicted more A choices, but the Delta model predicted more C choices, because those respective options had higher cumulative versus average reward values. Results were more in line with the Decay model's predictions. This suggests that people may retrieve memories of cumulative reward to compute expected value instead of learning average rewards for each option.</p>","PeriodicalId":72634,"journal":{"name":"CogSci ... Annual Conference of the Cognitive Science Society. Cognitive Science Society (U.S.). 
Conference","volume":"2018 ","pages":"1175-1180"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8086699/pdf/nihms-997021.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38941524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychologists have used the semantic fluency task for decades to gain insight into the processes and representations underlying memory retrieval. Recent work has suggested that a censored random walk on a semantic network resembles semantic fluency data because it produces optimal foraging. However, fluency data have rich structure beyond being consistent with optimal foraging. Under the assumption that memory can be represented as a semantic network, we test a variety of memory search processes and examine how well they capture the richness of fluency data. The search processes we explore vary in the extent to which they explore the network globally or exploit local clusters, and in whether they are strategic. We found that a censored random walk with a priming component best captures the frequency and clustering effects seen in human fluency data.
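The baseline process under comparison is simple to state: a walker moves randomly over the semantic network, and a word is reported only the first time its node is visited (repeat visits are censored). A toy sketch follows; the graph and its node names are illustrative, not the authors' network or their priming component.

```python
import random

def censored_random_walk(graph, start, n_steps=100, seed=0):
    """Uniform random walk on an adjacency-list graph; a node is
    reported only on its first visit (later visits are censored)."""
    rng = random.Random(seed)
    node, reported, seen = start, [start], {start}
    for _ in range(n_steps):
        node = rng.choice(graph[node])
        if node not in seen:
            seen.add(node)
            reported.append(node)
    return reported

# toy semantic network with two loose clusters (pets / farm animals)
animals = {
    "dog":   ["cat", "wolf", "horse"],
    "cat":   ["dog", "lion"],
    "wolf":  ["dog", "lion"],
    "lion":  ["cat", "wolf"],
    "horse": ["dog", "cow", "sheep"],
    "cow":   ["horse", "sheep"],
    "sheep": ["horse", "cow"],
}
```

Because the walk lingers inside densely connected clusters before crossing a bridge node, the censored report naturally shows the cluster runs seen in human fluency lists.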
{"title":"Modeling Semantic Fluency Data as Search on a Semantic Network.","authors":"Jeffrey C Zemla, Joseph L Austerweil","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Psychologists have used the semantic fluency task for decades to gain insight into the processes and representations underlying memory retrieval. Recent work has suggested that a censored random walk on a semantic network resembles semantic fluency data because it produces optimal foraging. However, fluency data have rich structure beyond being consistent with optimal foraging. Under the assumption that memory can be represented as a semantic network, we test a variety of memory search processes and examine how well these processes capture the richness of fluency data. The search processes we explore vary in the extent they explore the network globally or exploit local clusters, and whether they are strategic. We found that a censored random walk with a priming component best captures the frequency and clustering effects seen in human fluency data.</p>","PeriodicalId":72634,"journal":{"name":"CogSci ... Annual Conference of the Cognitive Science Society. Cognitive Science Society (U.S.). Conference","volume":"2017 ","pages":"3646-3651"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5796672/pdf/nihms888346.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35793381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Infants' speech perception adapts to the phonemic categories of their native language, a process assumed to be driven by the distributional properties of speech. This study investigates whether deep neural networks (DNNs), the current state-of-the-art in distributional feature learning, are capable of learning phoneme-like representations of speech in an unsupervised manner. We trained DNNs with unlabeled and labeled speech and analyzed the activations of each layer with respect to the phones in the input segments. The analyses reveal that the emergence of phonemic invariance in DNNs depends on the availability of phonemic labeling of the input during training. No increased phonemic selectivity of the hidden layers was observed in the purely unsupervised networks despite successful learning of low-dimensional representations for speech. This suggests that additional learning constraints or more sophisticated models are needed to account for the emergence of phone-like categories in distributional learning operating on natural speech.
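One generic way to quantify the "phonemic selectivity" of a layer (a sketch of the kind of analysis described, not the paper's exact metric) is the fraction of each unit's activation variance explained by phone identity, averaged over units:

```python
import numpy as np

def phonemic_selectivity(activations, phones):
    """Between-phone variance as a fraction of total variance, averaged
    over units: 1.0 means perfectly phone-selective, 0.0 phone-blind.

    activations: array-like of shape (frames, units)
    phones: per-frame phone labels
    """
    X = np.asarray(activations, dtype=float)
    y = np.asarray(phones)
    grand = X.mean(axis=0)
    between = sum((y == p).sum() * (X[y == p].mean(axis=0) - grand) ** 2
                  for p in np.unique(y))
    total = ((X - grand) ** 2).sum(axis=0)
    return float(np.mean(between / np.maximum(total, 1e-12)))
```

On such a measure, a layer whose units respond identically across phones scores near zero, which is the pattern the abstract reports for the purely unsupervised networks.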
{"title":"Analyzing Distributional Learning of Phonemic Categories in Unsupervised Deep Neural Networks.","authors":"Okko Räsänen, Tasha Nagamine, Nima Mesgarani","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Infants' speech perception adapts to the phonemic categories of their native language, a process assumed to be driven by the distributional properties of speech. This study investigates whether deep neural networks (DNNs), the current state-of-the-art in distributional feature learning, are capable of learning phoneme-like representations of speech in an unsupervised manner. We trained DNNs with unlabeled and labeled speech and analyzed the activations of each layer with respect to the phones in the input segments. The analyses reveal that the emergence of phonemic invariance in DNNs is dependent on the availability of phonemic labeling of the input during the training. No increased phonemic selectivity of the hidden layers was observed in the purely unsupervised networks despite successful learning of low-dimensional representations for speech. This suggests that additional learning constraints or more sophisticated models are needed to account for the emergence of phone-like categories in distributional learning operating on natural speech.</p>","PeriodicalId":72634,"journal":{"name":"CogSci ... Annual Conference of the Cognitive Science Society. Cognitive Science Society (U.S.). 
Conference","volume":"2016 ","pages":"1757-1762"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5775908/pdf/nihms850015.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35759267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the word-learning domain, both adults and young children are able to find the correct referent of a word from highly ambiguous contexts that involve many words and objects by computing distributional statistics across the co-occurrences of words and referents at multiple naming moments (Yu & Smith, 2007; Smith & Yu, 2008). However, there is still debate regarding how learners accumulate distributional information to learn object labels in natural learning environments, and what underlying learning mechanism learners are most likely to adopt. Using the Human Simulation Paradigm (Gillette, Gleitman, Gleitman & Lederer, 1999), we found that participants' learning performance gradually improved and that their ability to remember and carry over partial knowledge from past learning instances facilitated subsequent learning. These results support the statistical learning model that word learning is a continuous process.
{"title":"Statistical Word Learning is a Continuous Process: Evidence from the Human Simulation Paradigm.","authors":"Yayun Zhang, Daniel Yurovsky, Chen Yu","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>In the word-learning domain, both adults and young children are able to find the correct referent of a word from highly ambiguous contexts that involve many words and objects by computing distributional statistics across the co-occurrences of words and referents at multiple naming moments (Yu & Smith, 2007; Smith & Yu, 2008). However, there is still debate regarding how learners accumulate distributional information to learn object labels in natural learning environments, and what underlying learning mechanism learners are most likely to adopt. Using the Human Simulation Paradigm (Gillette, Gleitman, Gleitman & Lederer, 1999), we found that participants' learning performance gradually improved and that their ability to remember and carry over partial knowledge from past learning instances facilitated subsequent learning. These results support the statistical learning model that word learning is a continuous process.</p>","PeriodicalId":72634,"journal":{"name":"CogSci ... Annual Conference of the Cognitive Science Society. Cognitive Science Society (U.S.). Conference","volume":"2015 ","pages":"2793-2798"},"PeriodicalIF":0.0,"publicationDate":"2015-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5722460/pdf/nihms-776011.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35241785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The current study investigated eye-hand coordination in natural reaching. We asked whether the speed of reaching related to the quality of visual information obtained by young children and adults. Participants played with objects on a table while their eye and hand movements were recorded. We developed new techniques to find reaching events in natural activity and to determine how closely participants aligned gaze to objects while reaching. Reaching speed and eye alignment were related for adults but not for children. These results suggest that adults but not children adapt reaching movements according to the quality of visual information (or vice-versa) during natural activity. We discuss possibilities for why this coordination was not observed in children.
{"title":"Visual-motor coordination in natural reaching of young children and adults.","authors":"John M Franchak, Chen Yu","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The current study investigated eye-hand coordination in natural reaching. We asked whether the speed of reaching related to the quality of visual information obtained by young children and adults. Participants played with objects on a table while their eye and hand movements were recorded. We developed new techniques to find reaching events in natural activity and to determine how closely participants aligned gaze to objects while reaching. Reaching speed and eye alignment were related for adults but not for children. These results suggest that adults but not children adapt reaching movements according to the quality of visual information (or vice-versa) during natural activity. We discuss possibilities for why this coordination was not observed in children.</p>","PeriodicalId":72634,"journal":{"name":"CogSci ... Annual Conference of the Cognitive Science Society. Cognitive Science Society (U.S.). Conference","volume":"2015 ","pages":"728-733"},"PeriodicalIF":0.0,"publicationDate":"2015-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5722454/pdf/nihms776010.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35241783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An understanding of human collaboration requires a level of analysis that concentrates on sensorimotor behaviors in which the behaviors of social partners continually adjust to and influence each other. A suite of individual differences in partners' ability both to read the social cues of others and to send effective behavioral cues to others creates dyad differences in joint attention and joint action. The present paper shows that infant and dyad differences in hand-eye coordination predict dyad differences in joint attention. In the study reported here, 51 toddlers and their parents wore head-mounted eye-trackers as they played together with objects. This method allowed us to track the gaze direction of each participant to determine when they attended to the same object. We found that physically active toddlers align their looking behavior with their parent, and achieve a high proportion of time spent jointly attending to the same object in toy play. However, joint attention bouts in toy play do not depend on gaze following but rather on the coordination of gaze with hand actions on objects. Both infants and parents attend to their partner's object manipulations and in so doing fixate the object visually attended by their partner. Thus, the present results provide evidence for another pathway to joint attention: hand following instead of gaze following. Moreover, dyad differences in joint attention are associated with dyad differences in hand following, and specifically with parents' and infants' manual activities on objects and the within- and between-partner coordination of hands and eyes during parent-infant interactions. In particular, infants' manual actions on objects play a critical role in organizing parent-infant joint attention to an object.
{"title":"Linking Joint Attention with Hand-Eye Coordination - A Sensorimotor Approach to Understanding Child-Parent Social Interaction.","authors":"Chen Yu, Linda B Smith","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>An understanding of human collaboration requires a level of analysis that concentrates on sensorimotor behaviors in which the behaviors of social partners continually adjust to and influence each other. A suite of individual differences in partners' ability to both read the social cues of others and to send effective behavioral cues to others create dyad differences in joint attention and joint action. The present paper shows that infant and dyad differences in hand-eye coordination predict dyad differences in joint attention. In the study reported here, 51 toddlers and their parents wore head-mounted eye-trackers as they played together with objects. This method allowed us to track the gaze direction of each participant to determine when they attended to the same object. We found that physically active toddlers align their looking behavior with their parent, and achieve a high proportion of time spent jointly attending to the same object in toy play. However, joint attention bouts in toy play don't depend on gaze following but rather on the coordination of gaze with hand actions on objects. Both infants and parents attend to their partner's object manipulations and in so doing fixate the object visually attended by their partner. Thus, the present results provide evidence for another pathway to joint attention - hand following instead of gaze following. Moreover, dyad differences in joint attention are associated with dyad differences in hand following, and specifically parents' and infants' manual activities on objects and the within- and between-partner coordination of hands and eyes during parent-infant interactions. 
In particular, infants' manual actions on objects play a critical role in organizing parent-infant joint attention to an object.</p>","PeriodicalId":72634,"journal":{"name":"CogSci ... Annual Conference of the Cognitive Science Society. Cognitive Science Society (U.S.). Conference","volume":"2015 ","pages":"2763-2768"},"PeriodicalIF":0.0,"publicationDate":"2015-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5722468/pdf/nihms776008.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35241784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}