Humans are often faced with an exploration-versus-exploitation trade-off. A commonly used paradigm, the multi-armed bandit task, has shown that humans exhibit an "uncertainty bonus", which combines with estimated reward to drive exploration. However, previous studies often modeled belief updating using either a Bayesian model that assumed the reward contingencies to remain stationary, or a reinforcement learning model. Separately, we previously showed that human learning in the bandit task is best captured by a dynamic-belief Bayesian model. We hypothesize that the estimated uncertainty bonus may depend on which learning model is employed. Here, we re-analyze a bandit dataset using all three learning models. We find that the dynamic-belief model captures human choice behavior best, while also uncovering a much larger uncertainty bonus than the other models. More broadly, our results emphasize the importance of an appropriate learning model, which is crucial for correctly characterizing the processes underlying human decision making.
"Revisiting the Role of Uncertainty-Driven Exploration in a (Perceived) Non-Stationary World." Dalin Guo, Angela J Yu. CogSci ... Annual Conference of the Cognitive Science Society, vol. 43, pp. 2045-2051, July 2021. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8341546/pdf/nihms-1725387.pdf
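The contrast the abstract draws between a stationary Bayesian learner and a dynamic-belief learner, and the uncertainty-bonus choice rule, can be sketched with a simple Beta-Bernoulli model. This is an illustrative sketch, not the authors' model: the class name, the leak-toward-prior update, and all parameter values are hypothetical.

```python
import math

def posterior_stats(a, b):
    """Mean and standard deviation of a Beta(a, b) belief."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

class DynamicBeliefArm:
    """Beta-Bernoulli belief about one bandit arm's reward rate.

    gamma < 1 decays accumulated evidence toward the prior each trial,
    a simple stand-in for a dynamic-belief (non-stationary) assumption;
    gamma = 1.0 recovers the stationary Bayesian learner.
    """
    def __init__(self, prior_a=1.0, prior_b=1.0, gamma=0.9):
        self.prior = (prior_a, prior_b)
        self.a, self.b = prior_a, prior_b
        self.gamma = gamma

    def update(self, reward):
        # Leak old evidence toward the prior, then add the new observation.
        pa, pb = self.prior
        self.a = self.gamma * self.a + (1 - self.gamma) * pa + reward
        self.b = self.gamma * self.b + (1 - self.gamma) * pb + (1 - reward)

def choice_value(arm, bonus_weight=0.5):
    """Estimated reward plus an uncertainty bonus (posterior SD)."""
    mean, sd = posterior_stats(arm.a, arm.b)
    return mean + bonus_weight * sd
```

Because gamma < 1 keeps the posterior from ever collapsing, the dynamic-belief learner maintains residual uncertainty on every arm, which is one way a larger estimated uncertainty bonus could emerge under that learning model.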
Jacob Russin, Roland Fernandez, Hamid Palangi, Eric Rosen, Nebojsa Jojic, Paul Smolensky, Jianfeng Gao
A longstanding question in cognitive science concerns the learning mechanisms underlying compositionality in human cognition. Humans can infer the structured relationships (e.g., grammatical rules) implicit in their sensory observations (e.g., auditory speech), and use this knowledge to guide the composition of simpler meanings into complex wholes. Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations. We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings (e.g., the quantities corresponding to numerals) should be composed according to structured rules (e.g., order of operations). Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
"Compositional Processing Emerges in Neural Networks Solving Math Problems." Jacob Russin, Roland Fernandez, Hamid Palangi, Eric Rosen, Nebojsa Jojic, Paul Smolensky, Jianfeng Gao. CogSci ... Annual Conference of the Cognitive Science Society, vol. 2021, pp. 1767-1773, July 2021. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8491571/pdf/nihms-1741686.pdf
Perceiving 3D structure in natural images is an immense computational challenge for the visual system. While many previous studies focused on the perception of rigid 3D objects, we applied a novel method to a common class of non-rigid objects: static images of the human body in the natural world. We investigated to what extent the human ability to interpret 3D poses in natural images depends on the typicality of the underlying 3D pose and the informativeness of the viewpoint. Using a novel 2AFC pose matching task, we measured how well subjects could match a target natural pose image with one of two comparison synthetic body images rendered from a different viewpoint: one was rendered with the same 3D pose parameters as the target, while the other was a distractor rendered with added noise on the joint angles. We found that performance for typical poses was measurably better than for atypical poses; however, we found no significant difference between informative and less informative viewpoints. Further comparisons of 2D and 3D pose matching models on the same task showed that 3D body knowledge is particularly important when interpreting images of atypical poses. These results suggest that the human ability to interpret 3D poses depends on pose typicality but not viewpoint informativeness, and that humans likely draw on prior knowledge of 3D pose structures.
"Three-dimensional pose discrimination in natural images of humans." Hongru Zhu, A. Yuille, D. Kersten. CogSci ... Annual Conference of the Cognitive Science Society, vol. 25, no. 1, pp. 223-229, July 2021. DOI: 10.1167/jov.21.9.1878
Everett Mettler, Timothy Burke, Christine M Massey, Philip J Kellman
Adaptive generation of spacing intervals using response times improves learning relative to both adaptive systems that do not use response times and fixed spacing schemes (Mettler, Massey, & Kellman, 2016). Previous studies have often used a limited number of presentations (e.g., four) of each learning item. Does adaptive practice benefit learning if items are presented until objective mastery criteria are attained? Does it matter whether mastered items drop out of the active learning set? We compared adaptive and non-adaptive spacing under conditions of mastery and dropout. Experiment 1 compared random presentation order with no dropout to adaptive spacing and mastery using the ARTS (Adaptive Response-time-based Sequencing) system. Adaptive spacing produced better retention than random presentation. Experiment 2 showed clear learning advantages for adaptive spacing compared to random schedules that also included dropout. Adaptive spacing outperforms random schedules of practice, including when learning proceeds to mastery and items drop out when mastered.
"Comparing Adaptive and Random Spacing Schedules during Learning to Mastery Criteria." Everett Mettler, Timothy Burke, Christine M Massey, Philip J Kellman. CogSci ... Annual Conference of the Cognitive Science Society, pp. 773-779, July 2020. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8324179/pdf/nihms-1722428.pdf
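The response-time-sensitive scheduling with mastery dropout described above can be illustrated with a toy priority rule. This is not the published ARTS priority equation; the scoring function, the slow-response threshold, and the three-in-a-row mastery criterion are all hypothetical choices for the sketch.

```python
# Toy sketch of adaptive, response-time-based sequencing with mastery dropout.
# Not the published ARTS algorithm; all scoring details are hypothetical.

class Item:
    def __init__(self, name):
        self.name = name
        self.trials_since_seen = 0    # trials elapsed since last presentation
        self.consecutive_correct = 0  # streak used as the mastery criterion
        self.last_rt = None           # seconds; None until first presentation

def priority(item, slow_rt=4.0):
    """Higher priority = presented sooner."""
    if item.last_rt is None:
        return float("inf")  # unseen items come first
    # Incorrect or slow responses shorten spacing (priority grows faster);
    # fast correct responses lengthen it.
    if item.consecutive_correct == 0:
        difficulty = 1.0
    else:
        difficulty = min(item.last_rt / slow_rt, 1.0)
    return item.trials_since_seen * (0.5 + difficulty)

def next_item(items, mastery_streak=3):
    """Pick the highest-priority item; mastered items drop out."""
    active = [it for it in items if it.consecutive_correct < mastery_streak]
    return max(active, key=priority) if active else None
```

The design point the sketch makes concrete: spacing emerges from competition among per-item priority scores rather than from a fixed schedule, and dropout is just a filter on the active set.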
Everett Mettler, Christine M Massey, Amina K El-Ashmawy, Philip J Kellman
Spacing presentations of learning items across time improves memory relative to massed schedules of practice: the well-known spacing effect. Spaced practice can be further enhanced by adaptively scheduling the presentation of learning items to deliver customized spacing intervals for individual items and learners. ARTS (Adaptive Response-time-based Sequencing; Mettler, Massey, & Kellman, 2016) determines spacing dynamically in relation to each learner's ongoing speed and accuracy in interactive learning trials. We demonstrate the effectiveness of ARTS when applied to chemistry nomenclature in community college chemistry courses by comparing adaptive schedules to fixed schedules consisting of continuously expanding spacing intervals. Adaptive spacing enhanced the efficiency and durability of learning, with learning gains persisting after a two-week delay and generalizing to a standardized assessment of chemistry knowledge after 2-3 months. Two additional experiments confirmed and extended these results in both laboratory and community college settings.
"Adaptive vs. Fixed Spacing of Learning Items: Evidence from Studies of Learning and Transfer in Chemistry Education." Everett Mettler, Christine M Massey, Amina K El-Ashmawy, Philip J Kellman. CogSci ... Annual Conference of the Cognitive Science Society, pp. 1598-1604, July 2020. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8324178/pdf/nihms-1722723.pdf
Face processing plays a critical role in human social life, from differentiating friends from enemies to choosing a life mate. In this work, we leverage various computer vision techniques, combined with human assessments of similarity between pairs of faces, to investigate human face representation. We find that combining a shape- and texture-feature-based model (the Active Appearance Model) with a particular form of metric learning not only achieves the best performance in predicting human similarity judgments on held-out data (compared both to other algorithms and to humans), but also performs better than or comparably to alternative approaches in modeling human social trait judgments (e.g., trustworthiness, attractiveness) and affective assessments (e.g., happy, angry, sad). This analysis yields several scientific findings: (1) facial similarity judgments rely on a relatively small number of facial features (8-12); (2) race- and gender-informative features play a prominent role in similarity perception; (3) similarity-relevant features alone are insufficient to capture human face representation; in particular, some affective features missing from similarity judgments are also necessary for constructing the complete psychological face representation.
"Leveraging Computer Vision Face Representation to Understand Human Face Representation." Chaitanya K Ryali, Xiaotian Wang, Angela J Yu. CogSci ... Annual Conference of the Cognitive Science Society, vol. 42, pp. 1080-1086, July 2020. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8336428/pdf/
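The feature-space-plus-metric-learning idea in the abstract above can be illustrated with a diagonal feature-weighted distance mapped to a similarity score. This is a minimal sketch under stated assumptions: the feature vectors, the diagonal metric, and the exponential distance-to-similarity mapping are hypothetical, not the paper's fitted model.

```python
import math

def weighted_distance(x, y, w):
    """Mahalanobis-style distance with a diagonal (per-feature) metric w.

    x, y: face feature vectors (e.g., AAM shape/texture coordinates);
    w:    non-negative per-feature weights learned from similarity data.
    Setting a weight to 0 makes that feature irrelevant to similarity.
    """
    return math.sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))

def predicted_similarity(x, y, w, scale=1.0):
    """Map distance to a similarity score in (0, 1]."""
    return math.exp(-scale * weighted_distance(x, y, w))
```

A sparse learned w is one way the finding that similarity judgments rely on a relatively small number of features (8-12) could be expressed: most feature weights shrink toward zero, leaving a handful of heavily weighted dimensions.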
Humans frequently overestimate the likelihood of desirable events while underestimating the likelihood of undesirable ones: a phenomenon known as unrealistic optimism. Previously, it was suggested that unrealistic optimism arises from asymmetric belief updating, with relatively reduced coding of undesirable information. Prior studies have shown that a reinforcement learning (RL) model with asymmetric learning rates (greater for a positive prediction error than for a negative prediction error) can account for unrealistic optimism in a bandit task, in particular the tendency of human subjects to persistently choose a single option when there are multiple equally good options. Here, we propose an alternative explanation of such persistent behavior by modeling human behavior with a Bayesian hidden Markov model, the Dynamic Belief Model (DBM). We find that DBM captures human choice behavior better than the previously proposed asymmetric RL model. Whereas asymmetric RL attains a measure of optimism by giving better-than-expected outcomes higher learning weights than worse-than-expected outcomes, DBM does so by progressively devaluing the unchosen options, thus placing greater emphasis on choice history independent of reward outcome (e.g., an oft-chosen option might continue to be preferred even if it has not been particularly rewarding), a factor which has broadly been shown to underlie sequential effects in a variety of behavioral settings. Moreover, previous work showed that the devaluation of unchosen options in DBM helps to compensate for a default assumption of environmental non-stationarity, thus allowing the decision-maker both to be more adaptive in changing environments and to still obtain near-optimal performance in stationary environments. Thus, the current work suggests both a novel rationale and a mechanism for persistent behavior in bandit tasks.
"Devaluation of Unchosen Options: A Bayesian Account of the Provenance and Maintenance of Overly Optimistic Expectations." Corey Yishan Zhou, Dalin Guo, Angela J Yu. CogSci ... Annual Conference of the Cognitive Science Society, vol. 42, pp. 1682-1688, July 2020. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8336429/pdf/
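The two candidate mechanisms contrasted in the abstract above can be sketched side by side: an asymmetric learning-rate update versus a decay of unchosen options toward the prior. This is illustrative only; the learning rates, decay parameter, and prior are hypothetical values, not quantities fitted in the paper.

```python
# Illustrative contrast of the two mechanisms; parameter values are not fitted.

def asymmetric_rl_update(value, reward, lr_pos=0.4, lr_neg=0.1):
    """RL update with a larger learning rate for positive prediction errors,
    so better-than-expected outcomes move the estimate more than
    worse-than-expected ones (an optimism asymmetry)."""
    pe = reward - value
    lr = lr_pos if pe > 0 else lr_neg
    return value + lr * pe

def dbm_style_decay(values, chosen, prior=0.5, gamma=0.8):
    """Devalue unchosen options by decaying their estimates toward the prior,
    in the spirit of a dynamic-belief (assumed non-stationary) account;
    the chosen option's estimate is left to be updated by its outcome."""
    return [v if i == chosen else gamma * v + (1 - gamma) * prior
            for i, v in enumerate(values)]
```

The sketch shows why the second mechanism emphasizes choice history: an unchosen option loses value every trial regardless of reward, so an oft-chosen option keeps its advantage even without outstanding outcomes.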
Previous research has shown that modified noun phrases (henceforth NPs) are subsequently retrieved faster than unmodified NPs, an effect often called the "semantic complexity effect". However, little is known about its mechanisms and underlying factors. In this study, we tested whether this effect is truly caused by the semantic information added by the modification, or whether it can be explained by the sheer amount of time that the processor spends expecting or maintaining an NP during the encoding phase. The results showed that time spent expecting or maintaining an NP can explain the effect over and above semantic and/or syntactic complexity. Our results challenge current memory-based accounts of the modification effect, such as the "distinctiveness" and "head-reactivation" accounts, and offer new insight into the memory processes at work during sentence comprehension.
"Sheer Time Spent Expecting or Maintaining a Representation Facilitates Subsequent Retrieval during Sentence Processing." Hossein Karimi, Michele Diaz, Eva Wittenberg. CogSci ... Annual Conference of the Cognitive Science Society, vol. 2020, pp. 2728-2734, July 2020. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10234091/pdf/
We compared the influence of prior knowledge on visual perception in infants, children, and adults in order to explore the developmental trajectory by which prior knowledge is integrated with new sensory input. Using an identical task across age groups, we tested how participants' accumulated experience affected their ability to judge the relative saturation levels within a pair of sequentially presented stimuli. We found that infants and children, relative to adults, showed greater influence of the current observation and reduced influence of memory in their perception. In fact, infants and children outperformed adults in discriminating between different levels of saturation, and their performance was less biased by previously experienced exemplars. Thus, the development of perceptual integration of memory leads to less precise discrimination in the moment, but allows observers to make use of their prior experience in interpreting a complex sensory environment.
"Memory integration into visual perception in infancy, childhood, and adulthood." Sagi Jaffe-Dax, Christine Potter, Tiffany Leung, Casey Lew-Williams, Lauren L Emberson. CogSci ... Annual Conference of the Cognitive Science Society, pp. 3322-3328, July 2020. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8455085/pdf/nihms-1619788.pdf
A key question in early word learning is how infants learn their first object names despite a natural environment thought to provide messy data for linking object names to their referents. Using head cameras worn by 7- to 11-month-old infants at home, we document the statistics of visual objects, spoken object names, and their co-occurrence in everyday mealtime events. We show that the extremely right-skewed frequency distribution of visual objects underlies word-referent co-occurrence statistics that set up a clear signal in the noise, upon which infants could capitalize to learn their first object names.
"The everyday statistics of objects and their names: How word learning gets its start." Elizabeth M Clerkin, Linda B Smith. CogSci ... Annual Conference of the Cognitive Science Society, vol. 2019, pp. 240-246, July 2019. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8549651/pdf/nihms-1685392.pdf
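The word-referent co-occurrence statistics described in the abstract above can be computed with a simple counter over time windows. This is a toy sketch; the event structure and example data are invented for illustration, not drawn from the head-camera corpus.

```python
from collections import Counter

def cooccurrence_counts(events):
    """Tally (spoken word, visible object) co-occurrences.

    events: iterable of (visible_objects, spoken_words) pairs, one per
    time window (e.g., a few seconds of a mealtime recording).
    """
    counts = Counter()
    for objects, words in events:
        for obj in objects:
            for word in words:
                counts[(word, obj)] += 1
    return counts
```

Under a right-skewed object frequency distribution, a few very frequent objects dominate visual experience, so their correct name-object pairs accumulate far more co-occurrences than spurious pairings, which is the "clear signal in the noise" the abstract points to.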