Effect of Fatigue on Word Production in Aphasia.
Daniel Mirman, Anna Krason, Malathi Thothathiri, Erica L Middleton
Speech production in aphasia is often described as "effortful", though the consequences of consistent, high degrees of cognitive effort have not been explored. Using recent work on mental effort as a theoretical framework, the present study examined how effort-related fatigue produces decrements in performance in picture naming among participants with post-stroke aphasia. We analyzed three data sets from prior studies in which participants completed a large picture-naming test. The decrease in naming accuracy across trials was statistically significant in two of the three samples. There were also significant effects of practice (better performance on a second test administration), word frequency (better performance for more frequent words), and word length (better performance for shorter words). These results are the first concrete demonstration of fatigue affecting performance on a language task in post-stroke aphasia. They open a new avenue for research on mental effort/fatigue with potential implications for aphasia assessment, treatment, and management.
CogSci ... Annual Conference of the Cognitive Science Society, vol. 46, pp. 2951-2956 (2024). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11310858/pdf/
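A fatigue effect of this kind can be quantified, in a minimal sketch, as the slope of a logistic regression of trial-level accuracy on trial index: a reliably negative slope indicates accuracy declining across the session. The simulated data and the plain gradient-descent fit below are illustrative assumptions, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 300
trial = np.arange(n_trials)

# Simulated participant: probability of a correct naming response
# drifts downward across the session (a fatigue-like trend).
true_logit = 1.5 - 0.01 * trial
acc = rng.random(n_trials) < 1 / (1 + np.exp(-true_logit))

# Logistic regression of accuracy on the standardized trial index,
# fit by plain gradient ascent on the log-likelihood.
x = (trial - trial.mean()) / trial.std()
X = np.column_stack([np.ones(n_trials), x])
w = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (acc - p) / n_trials

print(w[1])  # slope: negative when accuracy declines across trials
```

In practice one would fit this per participant (or as a mixed-effects model) and test the trial slope against zero.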
Connecting Adaptive Perceptual Learning and Signal Detection Theory in Skin Cancer Screening.
Philip J Kellman, Sally Krasne, Christine M Massey, Everett W Mettler
Combining perceptual learning techniques with adaptive learning algorithms has been shown to accelerate the development of expertise in medical and STEM learning domains (Kellman & Massey, 2013; Kellman, Jacoby, Massey & Krasne, 2022). Virtually all adaptive learning systems have relied on simple accuracy data, which do not take response bias into account, a problem that may be especially consequential in multi-category perceptual classifications. We investigated whether adaptive perceptual learning in skin cancer screening can be enhanced by incorporating signal detection theory (SDT) methods that separate sensitivity from criterion. SDT-style concepts were used to alter sequencing, and separately to define mastery (category retirement). SDT retirement used a running d' estimate calculated from a recent window of trials based on hit and false alarm rates. Undergraduate participants used a Skin Cancer PALM (perceptual adaptive learning module) to learn classification of 10 cancerous and readily confused non-cancerous skin lesion types. Four adaptive conditions varied either the type of adaptive sequencing (standard vs. SDT) or the retirement criteria (standard vs. SDT). A non-adaptive control condition presented didactic instruction on dermatologic screening in video form, including images, classification schemes, and detailed explanations. All adaptive conditions robustly outperformed the non-adaptive control in both learning efficiency and fluency (large effect sizes). Between adaptive conditions, SDT retirement criteria produced greater learning efficiency than standard, accuracy-based mastery criteria at both immediate and delayed posttests (medium effect sizes). SDT sequencing and standard adaptive sequencing did not differ. SDT enhancements to adaptive perceptual learning procedures have potential to enhance learning efficiency.
CogSci ... Annual Conference of the Cognitive Science Society, vol. 45, pp. 3251-3258 (2023). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10764053/pdf/
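The running d' used for SDT retirement can be sketched as follows. This is a generic signal detection computation, not the authors' implementation; the window size and the log-linear correction for extreme rates are assumptions:

```python
from statistics import NormalDist

def d_prime(hits, signal_trials, false_alarms, noise_trials):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    The log-linear correction (add 0.5 to counts, 1 to trials)
    keeps the z-transform finite when a rate would be 0 or 1."""
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (signal_trials + 1)
    far = (false_alarms + 0.5) / (noise_trials + 1)
    return z(hr) - z(far)

def criterion(hits, signal_trials, false_alarms, noise_trials):
    """Response criterion c = -(z(hit rate) + z(false-alarm rate)) / 2.
    Separates bias from sensitivity; c = 0 means no bias."""
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (signal_trials + 1)
    far = (false_alarms + 0.5) / (noise_trials + 1)
    return -0.5 * (z(hr) + z(far))

def running_d_prime(trials, window=20):
    """d' over the most recent `window` trials.
    Each trial is (is_signal: bool, responded_signal: bool)."""
    recent = trials[-window:]
    hits = sum(1 for sig, resp in recent if sig and resp)
    fas = sum(1 for sig, resp in recent if not sig and resp)
    n_sig = sum(1 for sig, _ in recent if sig)
    n_noise = len(recent) - n_sig
    return d_prime(hits, n_sig, fas, n_noise)
```

A category could then be retired once its running d' crosses a mastery threshold, rather than once raw accuracy does.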
Verb vocabularies are shaped by complex meanings from the onset of development.
Justin B Kueser, Arielle Borovsky
Verbs and nouns vary in many ways, including in how they are used in language and in the timing of their early learning. We compare the distribution of semantic features that comprise early-acquired verb and noun meanings. Given overall semantic and syntactic differences between nouns and verbs, we hypothesized that the preference for directly perceptible features observed for nouns would be attenuated for verbs. Building on prior work using semantic features and semantic networks in nouns, we find that compared to early-learned nouns (N = 359), early-learned verbs (N = 103) have meanings disproportionately built from complex information inaccessible to the senses. Further, children's early verb vocabularies (N = 3,804) show semantic relationships strongly shaped by this complex information from the beginning of vocabulary development. Complexity is observed in early verb meanings and is reflected in the vocabularies of children even at the outset of verb learning.
CogSci ... Annual Conference of the Cognitive Science Society, vol. 45, pp. 130-138 (2023). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11142620/pdf/
Very Young Infants' Sensitivity to Consonant Mispronunciations in Word Recognition.
Caroline Beech, Daniel Swingley
Before they start to talk, infants learn the form and meaning of many common words. In the present work, we investigated the nature of this word knowledge, testing the specificity of very young infants' (6-14 months) phonological representations in an internet-based language-guided-looking task using correct pronunciations and initial-consonant mispronunciations of common words. Across the current sample (n=78 out of 96 pre-registered), infants' proportion of looking to the target (named image) versus the distractor was significantly lower when the target word was mispronounced, indicating sensitivity to phonological deviation. Performance patterns varied by age group. The youngest group (6-8 months, n=30) was at chance in both conditions; the middle group (9-11 months, n=21) showed significant recognition of correct pronunciations and a marginal mispronunciation effect; and the oldest group (12-14 months, n=27) demonstrated the mature pattern: significant recognition and a significant mispronunciation effect. Data collection to complete the pre-registered sample is ongoing.
CogSci ... Annual Conference of the Cognitive Science Society, vol. 45, pp. 792-798 (2023). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10487160/pdf/
Comparisons in Adaptive Perceptual Category Learning.
Victoria L Jacoby, Christine M Massey, Everett Mettler, Philip J Kellman
Recent work suggests that learning perceptual classifications can be enhanced by combining single item classifications with adaptive comparisons triggered by each learner's confusions. Here, we asked whether learning might work equally well using all comparison trials. In a face identification paradigm, we tested single item classifications, paired comparisons, and dual instance classifications that resembled comparisons but required two identification responses. In initial results, the comparisons condition showed evidence of greater efficiency (learning gain divided by trials or time invested). We suspected that this effect may have been driven by easier attainment of mastery criteria in the comparisons condition, and a negatively accelerated learning curve. To test this idea, we fit learning curves and found data consistent with the same underlying learning rate in all conditions. These results suggest that paired comparison trials may be as effective in driving learning of multiple perceptual classifications as more demanding single item classifications.
CogSci ... Annual Conference of the Cognitive Science Society, vol. 44, pp. 2372-2378 (2022). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10316997/pdf/nihms-1912132.pdf
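The learning-curve argument above can be made concrete with a sketch. Assuming a negatively accelerated (exponential-approach) curve, two conditions with the same underlying learning rate but different effective asymptotes reach a fixed mastery criterion after different numbers of trials, which can inflate apparent efficiency. The functional form and parameter values here are illustrative assumptions, not fits from the paper:

```python
import math

def neg_accel(t, asymptote, rate, start=0.0):
    """Negatively accelerated learning curve: fast early gains
    that level off as performance approaches the asymptote."""
    return asymptote - (asymptote - start) * math.exp(-rate * t)

def trials_to_criterion(crit, asymptote, rate, start=0.0):
    """Trial at which the curve first reaches `crit`
    (solve crit = neg_accel(t, ...) for t)."""
    return math.log((asymptote - start) / (asymptote - crit)) / rate

# Same underlying rate, different asymptotes: the higher-asymptote
# condition hits a fixed mastery criterion sooner, so gain-per-trial
# efficiency looks better even though the rate is identical.
fast = trials_to_criterion(0.8, asymptote=1.0, rate=0.05)
slow = trials_to_criterion(0.8, asymptote=0.9, rate=0.05)
```

Fitting such curves per condition, as the authors did, lets one test whether the rate parameter itself differs rather than only time-to-mastery.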
A Neural Network Model of Continual Learning with Cognitive Control.
Jacob Russin, Maryam Zolfaghar, Seongmin A Park, Erie Boorman, Randall C O'Reilly
Neural networks struggle in continual learning settings because of catastrophic forgetting: when trials are blocked, new learning can overwrite the learning from previous blocks. Humans learn effectively in these settings, in some cases even showing an advantage of blocking, suggesting that the brain contains mechanisms to overcome this problem. Here, we build on previous work and show that neural networks equipped with a mechanism for cognitive control do not exhibit catastrophic forgetting when trials are blocked. We further show an advantage of blocking over interleaving when there is a bias for active maintenance in the control signal, implying a tradeoff between maintenance and the strength of control. Analyses of map-like representations learned by the networks provided additional insights into these mechanisms. Our work highlights the potential of cognitive control to aid continual learning in neural networks, and offers an explanation for the advantage of blocking that has been observed in humans.
CogSci ... Annual Conference of the Cognitive Science Society, vol. 44, pp. 1064-1071 (2022). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10205096/pdf/nihms-1851069.pdf
Credit assignment in hierarchical option transfer.
Jing-Jing Li, Liyu Xia, Flora Dong, Anne G E Collins
Humans have the exceptional ability to efficiently structure past knowledge during learning to enable fast generalization. Xia and Collins (2021) evaluated this ability in a hierarchically structured, sequential decision-making task, where participants could build "options" (strategy "chunks") at multiple levels of temporal and state abstraction. A quantitative model, the Option Model, captured the transfer effects observed in human participants, suggesting that humans create and compose hierarchical options and use them to explore novel contexts. However, it is not well understood how learning in a new context is attributed to new and old options (i.e., the credit assignment problem). In a new context with new contingencies, where participants can recompose some aspects of previously learned options, do they reliably create new options or overwrite existing ones? Does the credit assignment depend on how similar the new option is to an old one? In our experiment, two groups of participants (n=124 and n=104) learned hierarchically structured options, experienced different amounts of negative transfer in a new option context, and were subsequently tested on the previously learned options. Behavioral analysis showed that old options were successfully reused without interference, and new options were appropriately created and credited. This credit assignment did not depend on how similar the new option was to the old option, showing great flexibility and precision in human hierarchical learning. These behavioral results were captured by the Option Model, providing further evidence for option learning and transfer in humans.
CogSci ... Annual Conference of the Cognitive Science Society, vol. 44, pp. 948-954 (2022). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9751259/pdf/
Complementary Structure-Learning Neural Networks for Relational Reasoning.
Jacob Russin, Maryam Zolfaghar, Seongmin A Park, Erie Boorman, Randall C O'Reilly
The neural mechanisms supporting flexible relational inferences, especially in novel situations, are a major focus of current research. In the complementary learning systems framework, pattern separation in the hippocampus allows rapid learning in novel environments, while slower learning in neocortex accumulates small weight changes to extract systematic structure from well-learned environments. In this work, we adapt this framework to a task from a recent fMRI experiment where novel transitive inferences must be made according to implicit relational structure. We show that computational models capturing the basic cognitive properties of these two systems can explain relational transitive inferences in both familiar and novel environments, and reproduce key phenomena observed in the fMRI experiment.
CogSci ... Annual Conference of the Cognitive Science Society, vol. 2021, pp. 1560-1566 (2021). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8491570/pdf/nihms-1741694.pdf
How the Mind Creates Structure: Hierarchical Learning of Action Sequences.
Maria K Eckstein, Anne G E Collins
Humans have the astonishing capacity to quickly adapt to varying environmental demands and reach complex goals in the absence of extrinsic rewards. Part of what underlies this capacity is the ability to flexibly reuse and recombine previous experiences, and to plan future courses of action in a psychological space that is shaped by these experiences. Decades of research have suggested that humans use hierarchical representations for efficient planning and flexibility, but the origin of these representations has remained elusive. This study investigates how 73 participants learned hierarchical representations through experience, in a task in which they had to perform complex action sequences to obtain rewards. Complex action sequences were composed of simpler action sequences, which were not rewarded, but whose completion was signaled to participants. We investigated the process with which participants learned to perform simpler action sequences and combined them into complex action sequences. After learning action sequences, participants completed a transfer phase in which either simple sequences or complex sequences were manipulated without notice. Relearning progressed more slowly when simple sequences were changed than when complex sequences were changed, consistent with a hierarchical representation in which lower levels are quickly consolidated, potentially stabilizing exploration, while higher levels remain malleable, with benefits for flexible recombination.
CogSci ... Annual Conference of the Cognitive Science Society, vol. 43, pp. 618-624 (2021). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8711273/pdf/nihms-1764449.pdf
Three-dimensional pose discrimination in natural images of humans.
Hongru Zhu, Alan Yuille, Daniel Kersten
Perceiving 3D structure in natural images is an immense computational challenge for the visual system. While many previous studies focused on the perception of rigid 3D objects, we applied a novel method to a common set of non-rigid objects: static images of the human body in the natural world. We investigated to what extent human ability to interpret 3D poses in natural images depends on the typicality of the underlying 3D pose and the informativeness of the viewpoint. Using a novel 2AFC pose matching task, we measured how well subjects were able to match a target natural pose image with one of two synthetic comparison body images shown from a different viewpoint: one was rendered with the same 3D pose parameters as the target, while the other was a distractor rendered with added noise on the joint angles. We found that performance for typical poses was measurably better than for atypical poses; however, we found no significant difference between informative and less informative viewpoints. Further comparisons of 2D and 3D pose matching models on the same task showed that 3D body knowledge is particularly important when interpreting images of atypical poses. These results suggest that human ability to interpret 3D poses depends on pose typicality but not viewpoint informativeness, and that humans likely draw on prior knowledge of 3D pose structure.
CogSci ... Annual Conference of the Cognitive Science Society, pp. 223-229 (2021). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9374112/pdf/nihms-1814947.pdf