Ramscar, Yarlett, Dye, Denny, and Thorpe (2010) showed how, consistent with the predictions of error-driven learning models, the order in which stimuli are presented in training can affect category learning. Specifically, learners exposed to artificial language input where objects preceded their labels learned the discriminating features of categories better than learners exposed to input where labels preceded objects. We sought to replicate this finding in two online experiments employing the same tests used originally: a four-pictures test (match a label to one of four pictures) and a four-labels test (match a picture to one of four labels). In our study, only findings from the four-pictures test were consistent with the original result. Additionally, the effect sizes observed were smaller, and participants over-generalized high-frequency category labels more than in the original study. We suggest that although the feature-label order predictions of Ramscar, Yarlett, Dye, Denny, and Thorpe (2010) were derived from error-driven learning, the authors failed to consider that this mechanism also predicts that performance in any training paradigm must inevitably be influenced by participants' prior experience. We consider our findings in light of these factors, and discuss implications for the generalizability and replication of training studies.
{"title":"The Effects of Linear Order in Category Learning: Some Replications of Ramscar et al. (2010) and Their Implications for Replicating Training Studies","authors":"Eva Viviani, Michael Ramscar, Elizabeth Wonnacott","doi":"10.1111/cogs.13445","DOIUrl":"10.1111/cogs.13445","url":null,"abstract":"<p>Ramscar, Yarlett, Dye, Denny, and Thorpe (2010) showed how, consistent with the predictions of error-driven learning models, the order in which stimuli are presented in training can affect category learning. Specifically, learners exposed to artificial language input where objects preceded their labels learned the discriminating features of categories better than learners exposed to input where labels preceded objects. We sought to replicate this finding in two online experiments employing the same tests used originally: A four pictures test (match a label to one of four pictures) and a four labels test (match a picture to one of four labels). In our study, only findings from the four pictures test were consistent with the original result. Additionally, the effect sizes observed were smaller, and participants over-generalized high-frequency category labels more than in the original study. We suggest that although Ramscar, Yarlett, Dye, Denny, and Thorpe (2010) feature-label order predictions were derived from error-driven learning, they failed to consider that this mechanism also predicts that performance in any training paradigm must inevitably be influenced by participant prior experience. We consider our findings in light of these factors, and discuss implications for the generalizability and replication of training studies.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13445","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141082740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sahil Luthra, Anne Marie Crinnion, David Saltzman, James S. Magnuson
We recently reported strong, replicable (i.e., replicated) evidence for lexically mediated compensation for coarticulation (LCfC; Luthra et al., 2021), whereby lexical knowledge influences a prelexical process. Critically, evidence for LCfC provides robust support for interactive models of cognition that include top-down feedback and is inconsistent with autonomous models that allow only feedforward processing. McQueen, Jesse, and Mitterer (2023) offer five counter-arguments against our interpretation; we respond to each of those arguments here and conclude that top-down feedback provides the most parsimonious explanation of extant data.
{"title":"Do They Know It's Christmash? Lexical Knowledge Directly Impacts Speech Perception","authors":"Sahil Luthra, Anne Marie Crinnion, David Saltzman, James S. Magnuson","doi":"10.1111/cogs.13449","DOIUrl":"10.1111/cogs.13449","url":null,"abstract":"<p>We recently reported strong, replicable (i.e., replicated) evidence for <i>lexically mediated compensation for coarticulation</i> (LCfC; Luthra et al., 2021), whereby lexical knowledge influences a prelexical process. Critically, evidence for LCfC provides robust support for <i>interactive</i> models of cognition that include <i>top-down feedback</i> and is inconsistent with <i>autonomous</i> models that allow only feedforward processing. McQueen, Jesse, and Mitterer (2023) offer five counter-arguments against our interpretation; we respond to each of those arguments here and conclude that top-down feedback provides the most parsimonious explanation of extant data.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141077180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
John R. Anderson, Shawn Betts, Daniel Bothell, Cvetomir M. Dimov, Jon M. Fincham
Open-ended tasks can be decomposed into the three levels of Newell's Cognitive Band: the Unit-Task level, the Operation level, and the Deliberate-Act level. We analyzed the video game Co-op Space Fortress at these levels, reporting both the match of a cognitive model to subject behavior and the use of electroencephalography (EEG) to track subject cognition. The Unit-Task level in this game involves coordinating with a partner to kill a fortress. At this highest level of the Cognitive Band, there is a good match between subject behavior and the model. The EEG signals were also strong enough to track when Unit Tasks succeeded or failed. The intermediate Operation level in this task involves legs of flight to achieve a kill. The EEG signals associated with these operations are much weaker than the signals associated with the Unit Tasks. Still, it was possible to reconstruct subject play with much better than chance success. There were significant differences in the leg behavior of subjects and models. Model behavior did not provide a good basis for interpreting a subject's behavior at this level. At the lowest Deliberate-Act level, we observed overlapping key actions, which the model did not display. Such overlapping key actions also frustrated efforts to identify EEG signals of motor actions. We conclude that the Unit-Task level is the appropriate level both for understanding open-ended tasks and for using EEG to track the performance of open-ended tasks.
{"title":"Tracking the Cognitive Band in an Open-Ended Task","authors":"John R. Anderson, Shawn Betts, Daniel Bothell, Cvetomir M. Dimov, Jon M. Fincham","doi":"10.1111/cogs.13454","DOIUrl":"10.1111/cogs.13454","url":null,"abstract":"<p>Open-ended tasks can be decomposed into the three levels of Newell's Cognitive Band: the Unit-Task level, the Operation level, and the Deliberate-Act level. We analyzed the video game Co-op Space Fortress at these levels, reporting both the match of a cognitive model to subject behavior and the use of electroencephalogram (EEG) to track subject cognition. The Unit Task level in this game involves coordinating with a partner to kill a fortress. At this highest level of the Cognitive Band, there is a good match between subject behavior and the model. The EEG signals were also strong enough to track when Unit Tasks succeeded or failed. The intermediate Operation level in this task involves legs of flight to achieve a kill. The EEG signals associated with these operations are much weaker than the signals associated with the Unit Tasks. Still, it was possible to reconstruct subject play with much better than chance success. There were significant differences in the leg behavior of subjects and models. Model behavior did not provide a good basis for interpreting a subject's behavior at this level. At the lowest Deliberate-Act level, we observed overlapping key actions, which the model did not display. Such overlapping key actions also frustrated efforts to identify EEG signals of motor actions. We conclude that the Unit-task level is the appropriate level both for understanding open-ended tasks and for using EEG to track the performance of open-ended tasks.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13454","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141077181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Erdin Mujezinović, Vsevolod Kapatsinski, Ruben van de Vijver
A word often expresses many different morphological functions. Which part of a word contributes to which part of the overall meaning is not always clear, which raises the question as to how such functions are learned. While linguistic studies tacitly assume the co-occurrence of cues and outcomes to suffice in learning these functions (Baer-Henney, Kügler, & van de Vijver, 2015; Baer-Henney & van de Vijver, 2012), error-driven learning suggests that contingency rather than contiguity is crucial (Nixon, 2020; Ramscar, Yarlett, Dye, Denny, & Thorpe, 2010). In error-driven learning, cues gain association strength if they predict a certain outcome, and they lose strength if the outcome is absent. This reduction of association strength is called unlearning. So far, it is unclear whether such unlearning has consequences for cue–outcome associations beyond the ones that get reduced. To test for such consequences of unlearning, we taught participants morphophonological patterns in an artificial language learning experiment. In one block, the cues to two morphological outcomes—plural and diminutive—co-occurred within the same word forms. In another block, a single cue to only one of these two outcomes was presented in a different set of word forms. We wanted to find out whether participants unlearn this cue's association with the outcome that is not predicted by the cue alone, and whether this allows the absent cue to be associated with the absent outcome. Our results show that if unlearning was possible, participants learned that the absent cue predicts the absent outcome better than if no unlearning was possible. This effect was stronger if the unlearned cue was more salient. This shows that unlearning takes place even if no alternative cues to an absent outcome are provided, which highlights that learners take both positive and negative evidence into account—as predicted by domain-general error-driven learning.
{"title":"One Cue's Loss Is Another Cue's Gain—Learning Morphophonology Through Unlearning","authors":"Erdin Mujezinović, Vsevolod Kapatsinski, Ruben van de Vijver","doi":"10.1111/cogs.13450","DOIUrl":"10.1111/cogs.13450","url":null,"abstract":"<p>A word often expresses many different morphological functions. Which part of a word contributes to which part of the overall meaning is not always clear, which raises the question as to how such functions are learned. While linguistic studies tacitly assume the co-occurrence of cues and outcomes to suffice in learning these functions (Baer-Henney, Kügler, & van de Vijver, 2015; Baer-Henney & van de Vijver, 2012), error-driven learning suggests that contingency rather than contiguity is crucial (Nixon, 2020; Ramscar, Yarlett, Dye, Denny, & Thorpe, 2010). In error-driven learning, cues gain association strength if they predict a certain outcome, and they lose strength if the outcome is absent. This reduction of association strength is called unlearning. So far, it is unclear if such unlearning has consequences for cue–outcome associations beyond the ones that get reduced. To test for such consequences of unlearning, we taught participants morphophonological patterns in an artificial language learning experiment. In one block, the cues to two morphological outcomes—plural and diminutive—co-occurred within the same word forms. In another block, a single cue to only one of these two outcomes was presented in a different set of word forms. We wanted to find out, if participants unlearn this cue's association with the outcome that is not predicted by the cue alone, and if this allows the absent cue to be associated with the absent outcome. Our results show that if unlearning was possible, participants learned that the absent cue predicts the absent outcome better than if no unlearning was possible. This effect was stronger if the unlearned cue was more salient. This shows that unlearning takes place even if no alternative cues to an absent outcome are provided, which highlights that learners take both positive and negative evidence into account—as predicted by domain general error-driven learning.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13450","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140923539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interpreting a seemingly simple function word like “or,” “behind,” or “more” can require logical, numerical, and relational reasoning. How are such words learned by children? Prior acquisition theories have often relied on positing a foundation of innate knowledge. Yet recent neural-network-based visual question answering models apparently can learn to use function words as part of answering questions about complex visual scenes. In this paper, we study what these models learn about function words, in the hope of better understanding how the meanings of these words can be learned by both models and children. We show that recurrent models trained on visually grounded language learn gradient semantics for function words requiring spatial and numerical reasoning. Furthermore, we find that these models can learn the meanings of the logical connectives “and” and “or” without any prior knowledge of logical reasoning, as well as early evidence that they are sensitive to alternative expressions when interpreting language. Finally, we show that word learning difficulty is dependent on the frequency of models' input. Our findings offer proof-of-concept evidence that it is possible to learn the nuanced interpretations of function words in a visually grounded context by using non-symbolic general statistical learning algorithms, without any prior knowledge of linguistic meaning.
{"title":"Learning the Meanings of Function Words From Grounded Language Using a Visual Question Answering Model","authors":"Eva Portelance, Michael C. Frank, Dan Jurafsky","doi":"10.1111/cogs.13448","DOIUrl":"https://doi.org/10.1111/cogs.13448","url":null,"abstract":"<p>Interpreting a seemingly simple function word like “or,” “behind,” or “more” can require logical, numerical, and relational reasoning. How are such words learned by children? Prior acquisition theories have often relied on positing a foundation of innate knowledge. Yet recent neural-network-based visual question answering models apparently can learn to use function words as part of answering questions about complex visual scenes. In this paper, we study what these models learn about function words, in the hope of better understanding how the meanings of these words can be learned by both models and children. We show that recurrent models trained on visually grounded language learn gradient semantics for function words requiring spatial and numerical reasoning. Furthermore, we find that these models can learn the meanings of logical connectives <i>and</i> and <i>or</i> without any prior knowledge of logical reasoning as well as early evidence that they are sensitive to alternative expressions when interpreting language. Finally, we show that word learning difficulty is dependent on the frequency of models' input. Our findings offer proof-of-concept evidence that it is possible to learn the nuanced interpretations of function words in a visually grounded context by using non-symbolic general statistical learning algorithms, without any prior knowledge of linguistic meaning.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13448","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140919246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anxiety shifts visual attention and perceptual mechanisms, preparing oneself to detect potentially threatening information more rapidly. Although this has been demonstrated for threat-related social stimuli, such as fearful expressions, it remains unexplored whether these effects encompass other social cues of danger, such as aggressive gestures/actions. To this end, we recruited a total of 65 participants and asked them to identify, as quickly and accurately as possible, potentially aggressive actions depicted by an agent. By introducing and manipulating the occurrence of electric shocks, we induced safe and threatening conditions. In addition, the association between electric shocks and aggression was also manipulated. Our results showed that participants had improved sensitivity, with no changes to criterion, when detecting aggressive gestures during threat compared to safe conditions. Furthermore, drift diffusion model analysis showed that under threat participants exhibited faster evidence accumulation toward the correct perceptual decision. Lastly, the relationship between threat source and aggression appeared not to impact any of the effects described above. Overall, our results indicate that the benefits gained from states of anxiety, such as increased sensitivity toward threat and greater evidence accumulation, are transposable to social stimuli capable of signaling danger other than facial expressions.
{"title":"Improved Perception of Aggression Under (un)Related Threat of Shock","authors":"Fábio Silva, Marta I. Garrido, Sandra C. Soares","doi":"10.1111/cogs.13451","DOIUrl":"10.1111/cogs.13451","url":null,"abstract":"<p>Anxiety shifts visual attention and perceptual mechanisms, preparing oneself to detect potentially threatening information more rapidly. Despite being demonstrated for threat-related social stimuli, such as fearful expressions, it remains unexplored if these effects encompass other social cues of danger, such as aggressive gestures/actions. To this end, we recruited a total of 65 participants and asked them to identify, as quickly and accurately as possible, potentially aggressive actions depicted by an agent. By introducing and manipulating the occurrence of electric shocks, we induced safe and threatening conditions. In addition, the association between electric shocks and aggression was also manipulated. Our result showed that participants have improved sensitivity, with no changes to criterion, when detecting aggressive gestures during threat compared to safe conditions. Furthermore, drift diffusion model analysis showed that under threat participants exhibited faster evidence accumulation toward the correct perceptual decision. Lastly, the relationship between threat source and aggression appeared to not impact any of the effects described above. Overall, our results indicate that the benefits gained from states of anxiety, such as increased sensitivity toward threat and greater evidence accumulation, are transposable to social stimuli capable of signaling danger other than facial expressions.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13451","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140915751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Slower perceptual alternations, a notable perceptual effect observed in psychiatric disorders, can be alleviated by antidepressant therapies that affect serotonin levels in the brain. While these phenomena have been well documented, the underlying neurocognitive mechanisms remain to be elucidated. Our study bridges this gap by employing a computational cognitive approach within a Bayesian predictive coding framework to explore these mechanisms in depression. We fitted a prediction error (PE) model to behavioral data from a binocular rivalry task, uncovering that significantly higher initial prior precision and lower PE led to a slower switch rate in patients with depression. Furthermore, serotonin-targeting antidepressant treatments significantly decreased the prior precision and increased PE, both of which were predictive of improvements in the perceptual alternation rate of depression patients. These findings indicated that the substantially slower perception switch rate in patients with depression was caused by the greater reliance on top-down priors and that serotonin treatment's efficacy was in its recalibration of these priors and enhancement of PE. Our study not only elucidates the cognitive underpinnings of depression, but also suggests computational modeling as a potent tool for integrating cognitive science with clinical psychology, advancing our understanding and treatment of cognitive impairments in depression.
{"title":"Reviving Bistable Perception in Patients With Depression by Decreasing the Overestimation of Prior Precision","authors":"Wenbo Wang, Changbo Zhu, Ting Jia, Meidan Zu, Yandong Tang, Liqin Zhou, Yanghua Tian, Bailu Si, Ke Zhou","doi":"10.1111/cogs.13452","DOIUrl":"10.1111/cogs.13452","url":null,"abstract":"<p>Slower perceptual alternations, a notable perceptual effect observed in psychiatric disorders, can be alleviated by antidepressant therapies that affect serotonin levels in the brain. While these phenomena have been well documented, the underlying neurocognitive mechanisms remain to be elucidated. Our study bridges this gap by employing a computational cognitive approach within a Bayesian predictive coding framework to explore these mechanisms in depression. We fitted a prediction error (PE) model to behavioral data from a binocular rivalry task, uncovering that significantly higher initial prior precision and lower PE led to a slower switch rate in patients with depression. Furthermore, serotonin-targeting antidepressant treatments significantly decreased the prior precision and increased PE, both of which were predictive of improvements in the perceptual alternation rate of depression patients. These findings indicated that the substantially slower perception switch rate in patients with depression was caused by the greater reliance on top-down priors and that serotonin treatment's efficacy was in its recalibration of these priors and enhancement of PE. Our study not only elucidates the cognitive underpinnings of depression, but also suggests computational modeling as a potent tool for integrating cognitive science with clinical psychology, advancing our understanding and treatment of cognitive impairments in depression.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140915731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
“Autonomous Sensory Meridian Response” (ASMR) refers to a sensory-emotional experience that was first explicitly identified and named within the past two decades in online discussion boards. Since then, there has been mounting psychological and neural evidence of a clustering of properties common to the phenomenon of ASMR, including convergence on the set of stimuli that trigger the experience, the properties of the experience itself, and its downstream effects. Moreover, psychological instruments have begun to be developed and employed in an attempt to measure it. Based on this empirical work, we make the case that despite its nonscientific origins, ASMR is a good candidate for being a real kind in the cognitive sciences. The phenomenon appears to have a robust causal profile and may also have an adaptive evolutionary history. We also argue that a more thorough understanding of the distinctive type of phenomenal experience involved in an ASMR episode can shed light on the functions of consciousness, and ultimately undermine certain “cognitive” theories of consciousness. We conclude that ASMR should be the subject of more extensive scientific investigation, particularly since it may also have the potential for therapeutic applications.
{"title":"Autonomous Sensory Meridian Response (ASMR) and the Functions of Consciousness","authors":"Dylan Ludwig, Muhammad Ali Khalidi","doi":"10.1111/cogs.13453","DOIUrl":"10.1111/cogs.13453","url":null,"abstract":"<p>“Autonomous Sensory Meridian Response” (ASMR) refers to a sensory-emotional experience that was first explicitly identified and named within the past two decades in online discussion boards. Since then, there has been mounting psychological and neural evidence of a clustering of properties common to the phenomenon of ASMR, including convergence on the set of stimuli that trigger the experience, the properties of the experience itself, and its downstream effects. Moreover, psychological instruments have begun to be developed and employed in an attempt to measure it. Based on this empirical work, we make the case that despite its nonscientific origins, ASMR is a good candidate for being a real kind in the cognitive sciences. The phenomenon appears to have a robust causal profile and may also have an adaptive evolutionary history. We also argue that a more thorough understanding of the distinctive type of phenomenal experience involved in an ASMR episode can shed light on the functions of consciousness, and ultimately undermine certain “cognitive” theories of consciousness. We conclude that ASMR should be the subject of more extensive scientific investigation, particularly since it may also have the potential for therapeutic applications.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13453","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140915723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stefan Depeweg, Constantin A. Rothkopf, Frank Jäkel
More than 50 years ago, Bongard introduced 100 visual concept learning problems as a challenge for artificial vision systems. These problems are now known as Bongard problems. Although they are well known in cognitive science and artificial intelligence, very little progress has been made toward building systems that can solve a substantial subset of them. In the system presented here, visual features are extracted through image processing and then translated into a symbolic visual vocabulary. We introduce a formal language that allows representing compositional visual concepts based on this vocabulary. Using this language and Bayesian inference, concepts can be induced from the examples that are provided in each problem. We find a reasonable agreement between the concepts with high posterior probability and the solutions formulated by Bongard himself for a subset of 35 problems. While this approach is far from solving Bongard problems as humans do, it does considerably better than previous approaches. We discuss the issues we encountered while developing this system and their continuing relevance for understanding visual cognition. For instance, contrary to other concept learning problems, the examples in Bongard problems are not random; instead, they are carefully chosen to ensure that the concept can be induced, and we found it helpful to take the resulting pragmatic constraints into account.
{"title":"Solving Bongard Problems With a Visual Language and Pragmatic Constraints","authors":"Stefan Depeweg, Contantin A. Rothkopf, Frank Jäkel","doi":"10.1111/cogs.13432","DOIUrl":"https://doi.org/10.1111/cogs.13432","url":null,"abstract":"<p>More than 50 years ago, Bongard introduced 100 visual concept learning problems as a challenge for artificial vision systems. These problems are now known as Bongard problems. Although they are well known in cognitive science and artificial intelligence, only very little progress has been made toward building systems that can solve a substantial subset of them. In the system presented here, visual features are extracted through image processing and then translated into a symbolic visual vocabulary. We introduce a formal language that allows representing compositional visual concepts based on this vocabulary. Using this language and Bayesian inference, concepts can be induced from the examples that are provided in each problem. We find a reasonable agreement between the concepts with high posterior probability and the solutions formulated by Bongard himself for a subset of 35 problems. While this approach is far from solving Bongard problems like humans, it does considerably better than previous approaches. We discuss the issues we encountered while developing this system and their continuing relevance for understanding visual cognition. For instance, contrary to other concept learning problems, the examples are not random in Bongard problems; instead they are carefully chosen to ensure that the concept can be induced, and we found it helpful to take the resulting pragmatic constraints into account.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13432","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140820581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ishanti Gangopadhyay, Daniel Fulford, Kathleen Corriveau, Jessica Mow, Pearl Han Li, Sudha Arunachalam
Understanding cognitive effort expended during assessments is essential to improving efficiency, accuracy, and accessibility within these assessments. Pupil dilation is commonly used as a psychophysiological measure of cognitive effort, yet research on its relationship with effort expended specifically during language processing is limited. The present study adds to and expands on this literature by investigating the relationships among pupil dilation, trial difficulty, and accuracy during a vocabulary test. Participants (n = 63, mean age = 19.25 years) completed a subset of trials from the Peabody Picture Vocabulary Test while seated at an eye-tracker monitor. During each trial, four colored images were presented on the monitor while a word was presented via audio recording. Participants verbally indicated which image they thought represented the target word. Words were categorized into Easy, Medium, and Hard difficulty. Pupil dilation during the Medium and Hard trials was significantly greater than during the Easy trials, though the Medium and Hard trials did not significantly differ from each other. Pupil dilation in comparison to trial accuracy presented a more complex pattern, with comparisons between accurate and inaccurate trials differing depending on the timing of the stimulus presentation. These results present further evidence that pupil dilation increases with cognitive effort associated with vocabulary tests, providing insights that could help refine vocabulary assessments and other related tests of language processing.
{"title":"Pupils Dilate More to Harder Vocabulary Words than Easier Ones","authors":"Ishanti Gangopadhyay, Daniel Fulford, Kathleen Corriveau, Jessica Mow, Pearl Han Li, Sudha Arunachalam","doi":"10.1111/cogs.13446","DOIUrl":"https://doi.org/10.1111/cogs.13446","url":null,"abstract":"<p>Understanding cognitive effort expended during assessments is essential to improving efficiency, accuracy, and accessibility within these assessments. Pupil dilation is commonly used as a psychophysiological measure of cognitive effort, yet research on its relationship with effort expended specifically during language processing is limited. The present study adds to and expands on this literature by investigating the relationships among pupil dilation, trial difficulty, and accuracy during a vocabulary test. Participants (<i>n</i> = 63, <i>M<sub>age</sub></i> = 19.25) completed a subset of trials from the Peabody Picture Vocabulary Test while seated at an eye-tracker monitor. During each trial, four colored images were presented on the monitor while a word was presented via audio recording. Participants verbally indicated which image they thought represented the target word. Words were categorized into Easy, Medium, and Hard difficulty. Pupil dilation during the Medium and Hard trials was significantly greater than during the Easy trials, though the Medium and Hard trials did not significantly differ from each other. Pupil dilation in comparison to trial accuracy presented a more complex pattern, with comparisons between accurate and inaccurate trials differing depending on the timing of the stimulus presentation. These results present further evidence that pupil dilation increases with cognitive effort associated with vocabulary tests, providing insights that could help refine vocabulary assessments and other related tests of language processing.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140639506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}