Humans and nonhuman animals learn to perform actions by associating actions with outcomes. In everyday life, outcomes sometimes occur only after a delay, and at an unexpected moment. The ability to connect actions with delayed outcomes has received less attention than performance in tasks where rewards follow the most recent action. Here, following a previous study (Sato et al. 2023), we designed a learning task to investigate humans' ability to link actions to outcomes that occur only after intervening choices. We prepared a total of six visual stimuli for use in three types of trials: A vs B, where choosing A immediately led to reward and choosing B was never rewarded; C vs D, where neither choice was immediately rewarded but choosing C led to reward in a later E vs F trial; and E vs F, where neither stimulus was itself associated with reward but a reward was given based on the past choice of C. Results showed that nine individuals learned to choose C, thereby receiving a delayed reward. Among them, one participant subsequently described the task structure correctly in words, while the remaining eight did so with misunderstandings. We also observed large individual differences in participants' action selection (e.g., an irrational bias for D, a possible superstitious bias for either E or F) and in their explicit/implicit understanding, expressed in words, of the link between action and delayed outcome. Our results offer new insights into the ability to cognitively link actions and outcomes separated by a time lag.
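To make the trial structure concrete, the sketch below simulates the three trial types and the delayed payoff of choosing C. It is an illustrative toy only: the fixed AB -> CD -> EF block order, the one-trial delay, the reward magnitudes, and all function names are assumptions, not the authors' task code.

```python
import random

def run_task(policy, n_blocks=20):
    """Simulate the three trial types described in the abstract.

    policy(trial_type) -> chosen stimulus.
    The fixed block order (AB -> CD -> EF) and the one-trial delay are
    assumptions made for illustration only.
    """
    total_reward = 0
    pending = False  # set by choosing C, paid out on the next EF trial

    for _ in range(n_blocks):
        for trial_type in ("AB", "CD", "EF"):
            choice = policy(trial_type)
            if trial_type == "AB":
                total_reward += 1 if choice == "A" else 0   # A is immediately rewarded
            elif trial_type == "CD":
                pending = (choice == "C")                   # C earns a delayed reward
            else:                                           # EF: outcome depends only on the earlier C/D choice
                total_reward += 1 if pending else 0
                pending = False
    return total_reward

# A learner that has discovered the delayed contingency chooses A and C.
informed = lambda t: {"AB": "A", "CD": "C", "EF": random.choice("EF")}[t]
print(run_task(informed))  # 40 with n_blocks=20: 20 immediate + 20 delayed rewards
```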
{"title":"A State-Transition-Free Delayed-Feedback Task Elicits Heterogeneous Human Responses.","authors":"Satoshi Hirata, Yutaro Sato, Hika Kuroshima, Yutaka Sakai","doi":"10.5334/joc.453","DOIUrl":"10.5334/joc.453","url":null,"abstract":"<p><p>Humans and nonhuman animals learn to perform actions by associating actions with outcomes. In everyday life, outcomes sometimes occur only after a delay, and at an unexpected moment. The ability to connect actions and delayed outcomes has received less attention than performance in tasks where rewards follow the most recent action. Here, following a previous study (Sato et al. 2023), we designed a learning task to investigate humans' ability to link actions and outcomes which occurred after intervening choices. We prepared a total of six visual stimuli for use in three types of trials: A vs B, where choosing A immediately led to reward and choosing B was never rewarded, C vs D, where neither choice was immediately rewarded but choice of C led to reward in a later E vs F trial, and E vs F, where neither stimulus was associated with reward but a reward was given based on choice of C in the past. Results showed that nine individuals learned to choose C, thereby receiving a delayed reward. Among them, one participant subsequently correctly described the task structure in words, while the remaining eight did so with misunderstandings. We also observed large individual differences in participants' action selection (e.g., an irrational bias for D, a possible superstitious bias for either E or F) and explicit/implicit understanding of the link between action and delayed outcome expressed in words. Our results offer new insights into the ability to cognitively link actions and outcomes following a time lag.</p>","PeriodicalId":32728,"journal":{"name":"Journal of Cognition","volume":"8 1","pages":"39"},"PeriodicalIF":0.0,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12273688/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144675939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-02. eCollection Date: 2025-01-01. DOI: 10.5334/joc.449
Ruhi Bhanap, Lea M Bartsch, Agnes Rosner
Memory strategies such as visual imagery and rehearsal are widely reported by participants as means to enhance recall. Their underlying mechanisms are thought to differ: visual imagery is believed to engage both visual and spatial aspects of memoranda, while rehearsal is thought to reactivate only item-specific information, excluding spatial information. In this study, we employed the Looking at Nothing (LAN) effect - in which individuals make eye movements towards the original location of a memorized item during retrieval - to investigate the reactivation of spatial location under both visual imagery and rehearsal. Our findings demonstrate that LAN occurs with both strategies, indicating that spatial information is reactivated during rehearsal as well. Notably, we observed higher immediate as well as delayed memory performance with visual imagery compared to rehearsal, yet the amount of LAN observed for the two strategies remained the same. To further explore whether this pattern of LAN and memory performance was driven by a modulation of the strength of long-term memory (LTM) traces, we introduced proactive interference (PI) in a second experiment. PI is known to impact LTM traces while leaving working memory (WM) intact. While PI led to a decline in WM performance for visual imagery, the amount of LAN remained the same. These results indicate that visual imagery and rehearsal both reactivate location information and, additionally, that visual imagery drives eye movements and memory benefits through distinct mechanisms.
{"title":"Tracking Reactivation of Location Information during Memory Strategies: Insights from Eye Movements.","authors":"Ruhi Bhanap, Lea M Bartsch, Agnes Rosner","doi":"10.5334/joc.449","DOIUrl":"10.5334/joc.449","url":null,"abstract":"<p><p>Memory strategies such as visual imagery and rehearsal are widely reported by participants as means to enhance recall. Their underlying mechanisms are thought to differ. Visual imagery is believed to engage both visual and spatial aspects of memoranda, while rehearsal is thought to reactivate only the item-specific information, excluding spatial information. In this study, we employed the Looking at Nothing (LAN) effect - in which individuals make eye movements towards the original location of the memorized item during retrieval - to investigate the reactivation of spatial location in both visual imagery and rehearsal. Our findings demonstrate that LAN occurs with both strategies, indicating that spatial information is reactivated during rehearsal as well. Notably, we observed higher immediate as well as delayed memory performance with visual imagery compared to rehearsal. However, the amount of LAN observed for both these strategies remained the same. To further explore whether these differences in the amount of LAN and memory performance were driven by a modulation of the strength of long-term memory (LTM) traces we introduced proactive interference (PI) in a second experiment. PI is known to impact LTM traces, while leaving working memory (WM) intact. While PI led to a decline in WM for visual imagery, the amount of LAN remained the same. These results indicate that visual imagery and rehearsal both reactivate location information and additionally, visual imagery drives eye movements and memory benefits through distinct mechanisms.</p>","PeriodicalId":32728,"journal":{"name":"Journal of Cognition","volume":"8 1","pages":"38"},"PeriodicalIF":0.0,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12227092/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144576478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-05-28. eCollection Date: 2025-01-01. DOI: 10.5334/joc.448
Katharina Kühne, Alex Miklashevsky, Anastasia Malyshevskaya
The space-time congruency effect refers to faster processing of past-/future-related words with the left/right response key, respectively, suggesting the presence of a horizontal Mental Time Line (MTL). Typically, this effect is observed in tasks with high temporal demand (i.e., past versus future categorization), but not in tasks where the time dimension has low relevance (e.g., sensicality judgments). However, it remains unclear whether intermediate levels of temporal demand are sufficient to activate the MTL. To address this, we conducted three experiments in which participants categorized the same set of temporal words based on their relation to living entities (Experiment 1), space (Experiment 2), and general time (Experiment 3). In individual analyses of the experiments, the space-time congruency effect was absent in Experiment 1. In Experiment 2, the effect emerged in reaction times but not in accuracy. In Experiment 3, it was observed in both measures. Subsequent comparisons across experiments suggested reliable differences between Experiments 2 and 3 in reaction times, and between Experiment 3 and the other two experiments in accuracy. Our results provide evidence that MTL activation depends on the level of temporal demand imposed by the task. The findings support the notion that mental representations are context-sensitive rather than fixed.
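The congruency effect described above is typically quantified as the reaction-time difference between incongruent and congruent pairings of time reference and response side. The following sketch shows one way to compute it from trial-level data; the column names, example values, and pandas-based layout are illustrative assumptions rather than the authors' analysis pipeline.

```python
import pandas as pd

# Hypothetical trial-level data (column names are assumptions for illustration).
trials = pd.DataFrame({
    "time_ref":  ["past", "past", "future", "future", "past", "future"],
    "resp_side": ["left", "right", "right", "left", "left", "right"],
    "rt_ms":     [512, 578, 498, 590, 530, 505],
})

# Congruent pairings on the horizontal mental time line: past-left, future-right.
is_congruent = (
    ((trials["time_ref"] == "past") & (trials["resp_side"] == "left"))
    | ((trials["time_ref"] == "future") & (trials["resp_side"] == "right"))
)
trials["condition"] = is_congruent.map({True: "congruent", False: "incongruent"})

mean_rt = trials.groupby("condition")["rt_ms"].mean()
effect_ms = mean_rt["incongruent"] - mean_rt["congruent"]  # positive => congruent faster
print(f"Space-time congruency effect: {effect_ms:.1f} ms")
```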
{"title":"Does the Level of Temporal Demand Affect Activation of the Mental Timeline?","authors":"Katharina Kühne, Alex Miklashevsky, Anastasia Malyshevskaya","doi":"10.5334/joc.448","DOIUrl":"10.5334/joc.448","url":null,"abstract":"<p><p>The space-time congruency effect indicates faster processing of past-/future-related words with the left/right response key, suggesting the presence of the horizontal Mental Time Line (MTL). Typically, this effect is observed in the tasks with high temporal demand (i.e., past versus future categorization), but not in those with the low relevance of the time dimension (i.e., sensicality judgments). However, it remains unclear whether intermediate levels of temporal demand are sufficient to activate the MTL. To address this, we conducted three experiments in which participants categorized the same set of temporal words based on their relation to living entities (Experiment 1), space (Experiment 2), and general time (Experiment 3). In individual analyses of the experiments, the space-time congruency effect was absent in Experiment 1. In Experiment 2, the effect emerged in reaction times but not in accuracy. In Experiment 3, it was observed in both measures. Subsequent comparisons across experiments suggested reliable differences between Experiments 2 and 3 in reaction times and between Experiment 3 and the other two experiments in accuracy. Our results provide evidence that MTL activation depends on the level of temporal demand required by the task. The findings support the notion that mental representations are context-sensitive rather than fixed.</p>","PeriodicalId":32728,"journal":{"name":"Journal of Cognition","volume":"8 1","pages":"37"},"PeriodicalIF":0.0,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12124278/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144200249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-05-23. eCollection Date: 2025-01-01. DOI: 10.5334/joc.447
Bennett L Schwartz, Anne M Cleary
The Doctrine of Concordance is the implicit assumption that cognitive processes, behavior, and phenomenological experience are highly correlated (Tulving, 1989). Tulving challenged this assumption, pointing to domains in which conscious experience did not accompany a particular measured cognitive process and to situations in which consciousness did not correlate with observable behavior. Schwartz (1999) extended this view, asserting that the underlying cognitive processes that produce conscious experience may differ from those that produce observable behavior. Though research on conscious experience has blossomed during the last quarter century and progress has been made in moving past the Doctrine of Concordance, we argue that some subdomains within memory research remain hampered by an implicit endorsement of it. We outline two areas of memory research in which current research and interpretations appear to fall prey to the Doctrine today: research on dual- versus single-process theories of recognition memory, including work on remember/know judgments, and research on retrospective memory confidence. We then describe four areas of research that show progress in understanding conscious experience by rejecting the Doctrine of Concordance: 1) metacognitive disconnects in the science of learning, 2) recognition illusions, 3) déjà vu experiences, and 4) aha experiences. We claim that there is often a dissociation between the mechanisms that create conscious experience and the underlying cognitive processes that contribute to behaviors that may seem causally correlated with conscious experience. Disentangling the relations between process, behavior, and conscious experience in the human mind's operation is important to understanding it.
{"title":"Tulving's (1989) Doctrine of Concordance Revisited.","authors":"Bennett L Schwartz, Anne M Cleary","doi":"10.5334/joc.447","DOIUrl":"10.5334/joc.447","url":null,"abstract":"<p><p>The Doctrine of Concordance is the implicit assumption that cognitive processes, behavior, and phenomenological experience are highly correlated (Tulving, 1989). Tulving challenged this assumption, pointing to domains in which conscious experience did not accompany a particular measured cognitive process and to situations in which consciousness did not correlate with the observable behavior. Schwartz (1999) extended this view, asserting that the underlying cognitive processes that produce conscious experience may differ from those that produce observable behavior. Though research on conscious experience blossomed during the last quarter century and progress has been made in moving past the Doctrine of Concordance, we argue that some subdomains within memory research remain hampered by an implicit endorsement of it. We outline two areas of memory research in which current research and interpretations appear to fall prey to the Doctrine today: research on the dual- vs. single-process theory in recognition memory, including work on remember/know judgments, and research on retrospective memory confidence. We then describe four areas of research that show progress in understanding conscious experience by rejecting the Doctrine of Concordance: These are 1) metacognitive disconnects in the science of learning, 2) recognition illusions, 3) déjà vu experiences, and 4) aha experiences. We claim that there is often a dissociation between the mechanisms that create conscious experience and the underlying cognitive processes that contribute to behaviors, which may seem causally correlated with conscious experience. Disentangling the relations between process, behavior, and conscious experience in the human mind's operation are important to understanding it.</p>","PeriodicalId":32728,"journal":{"name":"Journal of Cognition","volume":"8 1","pages":"36"},"PeriodicalIF":0.0,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12101318/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144143109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-28. eCollection Date: 2025-01-01. DOI: 10.5334/joc.442
Ali Pournaghdali, Bennett L Schwartz, Fabian A Soto
In this study, we used a multidimensional extension of signal detection theory called general recognition theory (GRT) to evaluate the influence of tip-of-the-tongue (TOT) states and feeling-of-knowing (FOK) experiences on the metacognitive sensitivity of recognition confidence judgments. In two experiments, we asked participants to recall names of famous individuals (Experiment 1) or correct answers to a series of general-knowledge questions (Experiment 2). If recall failed on a trial, participants provided metacognitive judgments of TOT and FOK, memory recognition responses, and metacognitive judgments of confidence in those recognition responses. To evaluate the influence of TOT and FOK on the metacognitive sensitivity of confidence judgments, we fit two different GRT models and constructed two sensitivity vs. metacognition curves, each representing changes in the metacognitive sensitivity of confidence as a function of the strength of TOT or FOK. The results showed that experiencing a TOT or a high FOK is associated with an increase in the metacognitive sensitivity of confidence judgments. These results are the first report of an influence of TOT and FOK on the metacognitive sensitivity of confidence.
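The GRT analysis reported here involves fitting multidimensional Gaussian models, which is beyond a short example, but the notion of metacognitive sensitivity of confidence can be illustrated with a simpler, model-free stand-in: the area under the type-2 ROC, i.e., how well confidence discriminates correct from incorrect recognition responses. The sketch below uses hypothetical data and is not the authors' GRT model.

```python
import numpy as np

def type2_auroc(confidence, correct):
    """Area under the type-2 ROC: the probability that a randomly chosen correct
    trial carries higher confidence than a randomly chosen incorrect trial
    (ties count as 0.5). A simple, model-free index of metacognitive sensitivity."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    diffs = confidence[correct][:, None] - confidence[~correct][None, :]
    return np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0)

# Hypothetical trials, split by whether a TOT was reported on the failed recall attempt.
conf_tot    = [4, 4, 3, 4, 2, 1]; correct_tot    = [1, 1, 1, 1, 0, 0]
conf_no_tot = [3, 4, 2, 3, 2, 3]; correct_no_tot = [1, 0, 0, 1, 1, 0]

print("Meta-sensitivity, TOT trials:   ", type2_auroc(conf_tot, correct_tot))
print("Meta-sensitivity, no-TOT trials:", type2_auroc(conf_no_tot, correct_no_tot))
```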
{"title":"Tip-of-the-Tongue and Feeling-of-Knowing Experiences Enhance Metacognitive Sensitivity of Confidence Evaluation of Semantic Memory.","authors":"Ali Pournaghdali, Bennett L Schwartz, Fabian A Soto","doi":"10.5334/joc.442","DOIUrl":"https://doi.org/10.5334/joc.442","url":null,"abstract":"<p><p>In this study, we used a multidimensional extension of signal detection theory called general recognition theory (GRT) to evaluate the influence of tip-of-the-tongue states (TOT) and feeling-of-knowing (FOK) experiences on the metacognitive sensitivity of recognition confidence judgments. In two experiments, we asked participants to recall names of famous individuals (Experiment 1) or to recall correct answers to a series of general-knowledge questions (Experiment 2). If recall failed for any trial, participants provided metacognitive judgments of TOT and FOK, memory recognition responses, and metacognitive judgments of confidence on those recognition responses. To evaluate the influence of TOT and FOK on the metacognitive sensitivity of confidence judgments, we fit two different GRT models and constructed two sensitivity vs. metacognition curves, each representing changes in metacognitive sensitivity of confidence, as a function of the strength of TOT or FOK. The results showed that experiencing a TOT or a high FOK is associated with an increase in metacognitive sensitivity of confidence judgments. These results are the first report of influence of TOT and FOK on metacognitive sensitivity of confidence.</p>","PeriodicalId":32728,"journal":{"name":"Journal of Cognition","volume":"8 1","pages":"33"},"PeriodicalIF":0.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12047626/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144027130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-28. eCollection Date: 2025-01-01. DOI: 10.5334/joc.443
Ning Mei, David Soto
The development of robust frameworks to understand how the human brain represents conscious and unconscious perceptual contents is paramount for making progress in the neuroscience of consciousness. Recent functional MRI studies using multi-voxel pattern classification analyses showed that unconscious contents could be decoded from brain activity patterns. However, decoding does not imply a full understanding of neural representations. Here we re-analysed data from a high-precision fMRI study, coupled with representational similarity analysis based on convolutional neural network models, to provide a detailed information-based approach to neural representations of both unconscious and conscious perceptual content. The results showed that computer vision model representations strongly predicted brain responses in ventral visual cortex and in fronto-parietal regions to both conscious and unconscious contents. Moreover, this pattern of results generalised when the models were trained and tested with different participants. Remarkably, these results held even when the analysis was restricted to observers who showed null perceptual sensitivity. In light of the highly distributed brain representation of unconscious information, we suggest that the functional role of fronto-parietal cortex in conscious perception is unlikely to be related to the broadcasting of information, as proposed by the global neuronal workspace theory, and may instead relate to the generation of meta-representations as proposed by higher-order theories.
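Representational similarity analysis of the kind described here compares the geometry of a model's feature space with the geometry of voxel response patterns. A minimal sketch of that comparison is given below using random placeholder arrays; the array shapes, variable names, and the Spearman comparison of RDMs are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Placeholder data: 40 stimuli, CNN-layer features and voxel patterns from one ROI.
cnn_features   = rng.normal(size=(40, 512))   # e.g. activations from one convolutional layer
voxel_patterns = rng.normal(size=(40, 200))   # e.g. ventral visual cortex voxels

# Representational dissimilarity matrices (condensed form): 1 - Pearson correlation.
rdm_model = pdist(cnn_features, metric="correlation")
rdm_brain = pdist(voxel_patterns, metric="correlation")

# RSA score: rank correlation between the two RDMs.
rho, p = spearmanr(rdm_model, rdm_brain)
print(f"model-brain RSA (Spearman rho) = {rho:.3f}, p = {p:.3f}")
```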
{"title":"Brain Representation in Conscious and Unconscious Vision.","authors":"Ning Mei, David Soto","doi":"10.5334/joc.443","DOIUrl":"https://doi.org/10.5334/joc.443","url":null,"abstract":"<p><p>The development of robust frameworks to understand how the human brain represents conscious and unconscious perceptual contents is paramount to make progress in the neuroscience of consciousness. Recent functional MRI studies using multi-voxel pattern classification analyses showed that unconscious contents could be decoded from brain activity patterns. However, decoding does not imply a full understanding of neural representations. Here we re-analysed data from a high-precision fMRI study coupled with representational similarity analysis based on convolutional neural network models to provide a detailed information-based approach to neural representations of both unconscious and conscious perceptual content. The results showed that computer vision model representations strongly predicted brain responses in ventral visual cortex and in fronto-parietal regions to both conscious and unconscious contents. Moreover, this pattern of results generalised when the models were trained and tested with different participants. Remarkably, these observations results held even when the analysis was restricted to observers that showed null perceptual sensitivity. In light of the highly distributed brain representation of unconscious information, we suggest that the functional role of fronto-parietal cortex in conscious perception is unlikely to be related to the broadcasting of information, as proposed by the global neuronal workspace theory, and may instead relate to the generation of meta-representations as proposed by higher-order theories.</p>","PeriodicalId":32728,"journal":{"name":"Journal of Cognition","volume":"8 1","pages":"34"},"PeriodicalIF":0.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12047638/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144049592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-28. eCollection Date: 2025-01-01. DOI: 10.5334/joc.446
Christophe Cauchi, Martijn Meeter
In adult readers, the perceptual span extends approximately 14-15 characters to the right of the fixated word, corresponding to approximately 5° of visual angle. However, the extent of information processing within this area remains unclear. In the present study, we address this question using a novel adaptation of the flankers task in which the eccentricity of the flankers with respect to the central target word is increased. Fifty-four participants performed a lexical decision task on a central four-letter word flanked by two words of equal length. The flankers were either orthographically related (rock - rock) or unrelated (path - rock) to the target, and their eccentricity varied from 1.65° to 4.29° (center-to-center) in 0.33° steps. Participants' fixation was controlled by an eye-tracker using the fixation point as a trigger, and stimuli were displayed for 170 ms to prevent eye movements. Results showed that the effect of unrelated flankers decreased with increasing eccentricity, whereas there was no effect of eccentricity for related flankers; in particular, unrelated flankers affected central word processing up to the outer edge of the parafovea. This observation provides evidence that the outer limits of the parafovea are engaged beyond prelexical processing. Lexical frequency influenced both reaction times (RTs) and accuracy rates, but did not interact with any other variable. This novel adaptation of the flankers task has potential advantages for investigating the spatial integration of orthographic information across the perceptual span.
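As a worked illustration of the center-to-center eccentricities, the sketch below converts the nine levels reported above (1.65° to 4.29° in 0.33° steps) into pixel offsets. The viewing distance and pixel size are assumed values chosen for illustration and are not taken from the paper.

```python
import numpy as np

# Eccentricity levels from the abstract: 1.65 deg to 4.29 deg in 0.33 deg steps (center-to-center).
eccentricities_deg = np.arange(1.65, 4.29 + 1e-9, 0.33)

# Hypothetical display geometry (assumed, not from the paper): 60 cm viewing distance,
# 0.027 cm per pixel (roughly 1920 px across a 52 cm wide screen).
viewing_distance_cm = 60.0
cm_per_px = 0.027

# Convert visual angle to horizontal offset on the screen: offset = tan(angle) * distance.
offsets_px = np.tan(np.deg2rad(eccentricities_deg)) * viewing_distance_cm / cm_per_px
for deg, px in zip(eccentricities_deg, offsets_px):
    print(f"{deg:4.2f} deg -> {px:6.1f} px from the target centre")
```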
{"title":"Tracking the Effects of Eccentricity on the Integration of Orthographic Information From Multiple Words.","authors":"Christophe Cauchi, Martijn Meeter","doi":"10.5334/joc.446","DOIUrl":"https://doi.org/10.5334/joc.446","url":null,"abstract":"<p><p>In adult readers, the perceptual span is approximately 14-15 characters to the right of the fixated word, corresponding to approximately 5° of visual angle. However, the extent of information processing within this area remains unclear. In the present study, we address this question using a novel adaptation of the flankers task in which the eccentricity of the flankers with respect to the central target word is increased. Fifty-four participants performed a lexical decision task on a central four-letter word flanked by two words of equal length. The flankers were either orthographically related (rock - rock) or unrelated (path - rock) to the target, and their eccentricity varied from 1.65° to 4.29° (center-to-center) in 0.33° steps. Participants' fixation was controlled by an eye-tracker using the fixation point as a trigger, and stimuli were displayed for 170 ms to avoid any eye movement. Results showed that the effect of unrelated flankers decreased with increasing eccentricity, while there was no effect of eccentricity of related flankers. In particular, the unrelated flankers affected central word processing up to the end of the parafovea. This observation provides evidence that the outer limits of the parafovea are engaged beyond prelexical processing. Lexical frequency influenced the magnitude of both reaction times (RTs) and accuracy rates, but did not interact with any variables. This novel adaptation of the flankers task has potential advantages for investigating the spatial integration of orthographic information across the perceptual span.</p>","PeriodicalId":32728,"journal":{"name":"Journal of Cognition","volume":"8 1","pages":"35"},"PeriodicalIF":0.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12047632/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144050652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-23. eCollection Date: 2025-01-01. DOI: 10.5334/joc.445
Boris New, Clément Guichet, Elsa Spinelli, Julien Barra
In this study, we investigated whether the visual "word height superiority illusion" (New et al., 2016) could also be found in the auditory modality. In two experiments, participants listened to word-word or word-pseudoword pairs of the same or different intensity and judged whether one item was louder than the other. They judged stimuli from their native language (L1) and from a second language (L2). In Experiment 1, with native French speakers, we found that words were perceived as louder than pseudowords in both the L1 (French) and the L2 (English). Moreover, the illusion was stronger in the L1 (French) than in the L2 (English). In Experiment 2, with native English speakers, we replicated the illusion in both the L1 (English) and the L2 (French), and to a similar extent in each. Overall, we replicated the visual word height superiority illusion in the auditory modality, which suggests that it may reflect a more general cognitive mechanism.
{"title":"Listening to Foreign Languages: Pump Up the Volume!","authors":"Boris New, Clément Guichet, Elsa Spinelli, Julien Barra","doi":"10.5334/joc.445","DOIUrl":"https://doi.org/10.5334/joc.445","url":null,"abstract":"<p><p>In this study, we investigated whether the visual \"word height superiority illusion\" (New et al., 2016) could be found in the auditory modality. In two experiments, participants listened to a word-word or word-pseudoword pair of the same or different intensity and judged whether one was louder than the other. They judged stimuli from their native language (L1) and second language (L2). In Experiment 1 with native French speakers, we found that words were perceived louder than pseudowords in the L1 (French) and the L2 (English). Moreover, the illusion was stronger in the L1 (French) than in the L2 (English). In Experiment 2 with native English speakers, we replicated the illusion both in the L1 (English) and the L2 (French) but to a similar extent. Overall, we replicated the visual word height superiority illusion in the auditory modality, which suggests that this may reflect a more general cognitive mechanism.</p>","PeriodicalId":32728,"journal":{"name":"Journal of Cognition","volume":"8 1","pages":"32"},"PeriodicalIF":0.0,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12023143/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144041806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-21. eCollection Date: 2025-01-01. DOI: 10.5334/joc.444
Jannis Friedrich, Martin H Fischer, Markus Raab
The field of grounded cognition is concerned with how concepts are represented through re-activation of the bodily modalities. Considerable empirical work supports this core tenet, but the field is rife with meta-theoretical issues that prevent meaningful progress beyond it. We describe these issues and provide a solution: an overarching theoretical framework. The two most commonly cited grounded cognition theories are perceptual symbol systems and conceptual metaphor theory. Under perceptual symbol systems, concepts are represented by integrating fragments of multi-modal percepts in a simulator. Conceptual metaphor theory involves a limited number of image schemas, primitive structural regularities extracted from interaction with the environment, undergoing a limited number of transformations into a concept. Both theories constitute important developments in understanding mental representations, yet we argue that they currently impede progress because they are prematurely elaborate. This forces them to rely on overly specific assumptions, which generates a lack of conceptual clarity and unsystematic testing of empirical work. Our minimalist account takes grounded cognition 'back to basics' with a common-denominator framework supported by converging evidence from other fields. It postulates that concepts are represented by simulation, re-activating mental states that were active when the concept was experienced, and by metaphoric mapping, in which concrete representations are recruited to represent abstract concepts. This enables incremental theory development without uncertain assumptions because it allows for descriptive research while nonetheless enabling falsification of theories. Our proposal provides the tools to resolve meta-theoretical issues and encourages a research program that integrates grounded cognition into the cognitive sciences.
{"title":"Issues in Grounded Cognition and How to Solve Them - the Minimalist Account.","authors":"Jannis Friedrich, Martin H Fischer, Markus Raab","doi":"10.5334/joc.444","DOIUrl":"https://doi.org/10.5334/joc.444","url":null,"abstract":"<p><p>The field of grounded cognition is concerned with how concepts are represented by re-activation of the bodily modalities. Considerable empirical work supports this core tenet, but the field is rife with meta-theoretical issues which prevent meaningfully progressing beyond this. We describe these issues and provide a solution: an overarching theoretical framework. The two most commonly cited grounded cognition theories are <i>perceptual symbol systems</i> and <i>conceptual metaphor theory</i>. Under perceptual symbol systems, concepts are represented by integrating fragments of multi-modal percepts in a simulator. Conceptual metaphor theory involves a limited number of image schemas, primitive structural regularities extracted from interaction with the environment, undergoing a limited number of transformations into a concept. Both theories constitute important developments to understanding mental representations, yet we argue that they currently impede progress because they are prematurely elaborate. This forces them to rely on overly specific assumptions, which generates a lack of conceptual clarity and unsystematic testing of empirical work. Our <i>minimalist account</i> takes grounded cognition 'back to basics' with a common-denominator framework supported by converging evidence from other fields. It postulates that concepts are represented by simulation, re-activating mental states that were active when experiencing this concept, and by metaphoric mapping, when concrete representations are sourced to represent abstract concepts. This enables incremental theory development without uncertain assumptions because it allows for descriptive research while nonetheless enabling falsification of theories. Our proposal provides the tools to resolve meta-theoretical issues and encourages a research program that integrates grounded cognition into the cognitive sciences.</p>","PeriodicalId":32728,"journal":{"name":"Journal of Cognition","volume":"8 1","pages":"31"},"PeriodicalIF":0.0,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12023178/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144040257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-17. eCollection Date: 2025-01-01. DOI: 10.5334/joc.439
Marius Barth, Christoph Stahl, Hilde Haider
Sequence learning in the serial response time task (SRTT) is one of the few learning phenomena where researchers agree that learning may proceed in the absence of awareness, while it is also possible to explicitly learn a sequence of events. In the past few decades, research into sequence learning largely focused on the type of representation that may underlie implicit sequence learning, and on whether two independent learning systems are necessary to explain qualitative differences between implicit and explicit learning. Using the drift-diffusion model, here we take a cognitive-processes perspective on sequence learning and investigate the cognitive operations that benefit from implicit and explicit sequence learning (e.g., stimulus detection and encoding, response selection, and response execution). To separate the processes involved in expressing implicit versus explicit knowledge, we manipulated explicit sequence knowledge independently of the opportunity to express such knowledge, and analyzed the resulting performance data with a drift-diffusion model to disentangle the contributions of these sub-processes. Results revealed that implicit sequence learning does not affect stimulus processing, but benefits response selection. Moreover, beyond response selection, response execution was affected. Explicit sequence knowledge did not change this pattern when participants worked on probabilistic materials, where it is difficult to anticipate the next response. However, when materials were deterministic, explicit knowledge enabled participants to switch from stimulus-based to plan-based action control, which was reflected in ample changes in the cognitive processes involved in performing the task. First implications for theories of sequence learning, and for how the diffusion model may be helpful in future research, are discussed.
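The drift-diffusion decomposition the authors rely on can be illustrated with a simple Euler-Maruyama simulation: an effect of sequence learning on response selection corresponds to a higher drift rate, whereas an effect on response execution would instead shorten non-decision time. The sketch below simulates two assumed parameter sets and is an illustration, not the authors' fitted model.

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, non_decision=0.30, start=0.5,
                 dt=0.001, noise=1.0, n_trials=2000, seed=1):
    """Simulate choice RTs from a basic drift-diffusion model (Euler-Maruyama).
    Evidence starts at start*boundary and accumulates until it hits 0 or boundary."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = start * boundary, 0.0
        while 0.0 < x < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)       # RT = decision time + non-decision time
        correct.append(x >= boundary)      # upper boundary = correct response
    return np.array(rts), np.array(correct)

# Hypothetical parameter values: learning the sequence raises the drift rate
# (faster, more accurate response selection); facilitated response execution
# would instead show up as a shorter non-decision time.
rt_random,  acc_random  = simulate_ddm(drift=1.5)
rt_learned, acc_learned = simulate_ddm(drift=2.5)
print(f"random material : mean RT {rt_random.mean():.3f} s, accuracy {acc_random.mean():.2f}")
print(f"learned sequence: mean RT {rt_learned.mean():.3f} s, accuracy {acc_learned.mean():.2f}")
```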
{"title":"How Implicit Sequence Learning and Explicit Sequence Knowledge Are Expressed in a Serial Response Time Task.","authors":"Marius Barth, Christoph Stahl, Hilde Haider","doi":"10.5334/joc.439","DOIUrl":"https://doi.org/10.5334/joc.439","url":null,"abstract":"<p><p>Sequence learning in the serial response time task (SRTT) is one of few learning phenomena where researchers agree that such learning may proceed in the absence of awareness, while it is also possible to explicitly learn a sequence of events. In the past few decades, research into sequence learning largely focused on the type of representation that may underlie implicit sequence learning, and whether or not two independent learning systems are necessary to explain qualitative differences between implicit and explicit learning. Using the drift-diffusion model, here we take a cognitive-processes perspective on sequence learning and investigate the cognitive operations that benefit from implicit and explicit sequence learning (e.g., stimulus detection and encoding, response selection, and response execution). To separate the processes involved in expressing implicit versus explicit knowledge, we manipulated explicit sequence knowledge independently of the opportunity to express such knowledge, and analyzed the resulting performance data with a drift-diffusion model to disentangle the contributions of these sub-processes. Results revealed that implicit sequence learning does not affect stimulus processing, but benefits response selection. Moreover, beyond response selection, response execution was affected. Explicit sequence knowledge did not change this pattern if participants worked on probabilistic materials, where it is difficult to anticipate the next response. However, if materials were deterministic, explicit knowledge enabled participants to switch from stimulus-based to plan-based action control, which was reflected in ample changes in the cognitive processes involved in performing the task. First implications for theories of sequence learning, and how the diffusion model may be helpful in future research, are dicussed.</p>","PeriodicalId":32728,"journal":{"name":"Journal of Cognition","volume":"8 1","pages":"30"},"PeriodicalIF":0.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12013281/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144056721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}