Pub Date: 2024-08-26 | DOI: 10.3758/s13423-024-02566-5
Jedidiah W Whitridge, Chris A Clark, Kathleen L Hourihan, Jonathan M Fawcett
The production effect refers to the finding that participants better remember items read aloud than items read silently. This pattern has been attributed to aloud items being relatively more distinctive in memory than silent items, owing to the integration of additional sensorimotor features within the encoding episode that are thought to facilitate performance at test. Other theorists have instead argued that producing an item encourages additional forms of processing not limited to production itself. We tested this hypothesis using a modified production task where participants named monochromatic line drawings aloud or silently either by generating the names themselves (no label condition) or reading a provided label (label condition). During a later test, participants were presented with each line drawing a second time and required to reproduce the original color and location using a continuous slider. Production was found to improve memory for visual features, but only when participants were required to generate the label themselves. Our findings support the notion that picture naming improves memory for visual features; however, this benefit appears to be driven by factors related to response generation rather than production itself.
Title: Generation (not production) improves the fidelity of visual representations in picture naming.
Pub Date: 2024-08-22 | DOI: 10.3758/s13423-024-02551-y
Chris Westbury, Michelle Yang, Kris Anderson
Osgood, Suci, and Tannenbaum were the first to attempt to identify the principal components of semantics, using dimensional reduction of a high-dimensional model of semantics constructed from human judgments of word relatedness. Modern word-embedding models analyze patterns of word use to construct higher-dimensional models of semantics that can be similarly subjected to dimensional reduction. Hollis and Westbury characterized the first eight principal components (PCs) of a word-embedding model by correlating them with several well-known lexical measures, such as logged word frequency, age of acquisition, valence, arousal, dominance, and concreteness. The results showed some clear differentiation of interpretation between the PCs. Here, we extend this work by analyzing a larger word-embedding matrix using semantic measures initially derived from subjective inspection of the PCs. We then use quantitative analysis to confirm the utility of these subjective measures for predicting PC values and cross-validate them on two word-embedding matrices developed from distinct corpora. Several semantic and word-class measures are strongly predictive of early PC values, including first-person and second-person verbs, personal relevance of abstract and concrete words, affect terms, and names of places and people. The predictors of the lowest-magnitude PCs generalized well to word-embedding matrices constructed from separate corpora, including matrices constructed using different word-embedding methods. The predictive categories we describe are consistent with Wittgenstein's argument that an autonomous level of social interaction grounds linguistic meaning.
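The dimensional-reduction step described above can be sketched in a few lines. This is a toy illustration with random stand-in data; a real analysis would load trained word vectors and published lexical norms instead:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a word-embedding matrix: 1,000 "words" x 50 dimensions.
embeddings = rng.normal(size=(1000, 50))

# Principal components via SVD of the mean-centered matrix.
centered = embeddings - embeddings.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
pc_scores = centered @ Vt.T          # each word's score on each PC

# Correlate PC1 scores with a lexical measure (a random stand-in here
# for, e.g., logged word frequency or concreteness norms).
lexical_measure = rng.normal(size=1000)
r = np.corrcoef(pc_scores[:, 0], lexical_measure)[0, 1]
```

In the kind of analysis the abstract describes, the correlation step would be repeated for each of the leading PCs against each candidate measure to characterize what the components encode.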
Title: The principal components of meaning, revisited.
Pub Date: 2024-08-22 | DOI: 10.3758/s13423-024-02557-6
Giorgia Anceresi, Daniele Gatti, Tomaso Vecchi, Marco Marelli, Luca Rinaldi
Different experiential traces (i.e., linguistic, motor, and perceptual) likely contribute to the organization of human semantic knowledge. Here, we address this issue by investigating whether visual experience affects sensitivity to distributional priors from natural language. We conducted an independent reanalysis of data from Bottini et al., in which early blind and sighted participants performed an auditory lexical decision task. Since previous research has shown that semantic neighborhood density (the mean distance between a target word and its closest semantic neighbors) can influence performance in lexical decision tasks, we investigated whether vision may alter the reliance on this semantic index. We demonstrate that early blind participants are more sensitive to semantic neighborhood density than sighted participants, as indicated by the significantly faster response times shown by the blind group for words with higher levels of semantic neighborhood density. These findings suggest that an early lack of visual experience may lead to enhanced sensitivity to the distributional history of words in natural language, deepening in turn our understanding of the tight interplay between linguistic and perceptual experience in the organization of conceptual knowledge.
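The semantic neighborhood density index defined above (mean distance from a word to its closest semantic neighbors) is straightforward to compute from an embedding matrix. The sketch below uses random stand-in vectors and an illustrative neighborhood size of 20; a real analysis would use trained word vectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy embedding space: 200 "words" x 50 dimensions.
vectors = rng.normal(size=(200, 50))
unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

def neighborhood_density(index, k=20):
    """Mean cosine distance from one word to its k closest neighbors.

    Smaller values mean closer neighbors, i.e., a denser neighborhood.
    """
    distances = 1.0 - unit @ unit[index]   # cosine distance to every word
    distances[index] = np.inf              # exclude the word itself
    nearest = np.sort(distances)[:k]
    return nearest.mean()

density = neighborhood_density(0)
```

Computed per word, this index can then be entered as a predictor of lexical decision response times, as in the reanalysis the abstract describes.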
Title: Visual experience modulates the sensitivity to the distributional history of words in natural language. (Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7616517/pdf/)
Pub Date: 2024-08-21 | DOI: 10.3758/s13423-024-02559-4
Patrick A F Laing, Bram Vervliet, Joseph E Dunsmoor, Ben J Harrison
Safety learning involves associating stimuli with the absence of threats, enabling the inhibition of fear and anxiety. Despite growing interest in psychology, psychiatry, and neuroscience, safety learning lacks a formal consensus definition, leading to inconsistent methodologies and varied results. Conceptualized as a form of inhibitory learning (conditioned inhibition), safety learning can be understood through formal learning theories, such as the Rescorla-Wagner and Pearce-Hall models. This review aims to establish a principled conceptualization of 'Pavlovian safety learning', identifying cognitive mechanisms that generate safety and the boundary conditions that constrain it. Based on these observations, we define Pavlovian safety learning as an active associative process, where surprising threat-omission (safety prediction error) acts as a salient reinforcing event. Instead of producing merely neutral or nonaversive states, safety learning endows stimuli with active positive associations to 'safety'. The resulting stimulus-safety memories counteract the influence of fear memories, promoting fear regulation, positive affect, and relief. We critically analyze traditional criteria of conditioned inhibition for their relevance to safety and propose areas for future innovation. A principled concept of Pavlovian safety learning may reduce methodological inconsistencies, stimulate translational research, and facilitate a comprehensive understanding of an indispensable psychological construct.
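How the Rescorla-Wagner model named above yields safety learning as conditioned inhibition can be shown with a short simulation. The trial structure and parameter values are illustrative: cue A alone is paired with threat, while the compound AX is followed by threat omission:

```python
# Minimal Rescorla-Wagner sketch of conditioned inhibition (safety learning).
# A alone -> threat (lambda = 1); compound AX -> threat omitted (lambda = 0).
# The surprising omission on AX trials produces a negative prediction error
# that drives V_X below zero: X becomes a safety signal.
alpha_beta = 0.1                 # combined salience/learning-rate term
V = {"A": 0.0, "X": 0.0}         # associative strengths

for trial in range(200):
    if trial % 2 == 0:                       # A alone, threat delivered
        error = 1.0 - V["A"]                 # lambda minus summed prediction
        V["A"] += alpha_beta * error
    else:                                    # AX compound, threat omitted
        error = 0.0 - (V["A"] + V["X"])      # safety prediction error
        V["A"] += alpha_beta * error
        V["X"] += alpha_beta * error
```

At asymptote V_A is positive and V_X negative, which is the signature of a conditioned inhibitor: presenting X alongside another threat cue should suppress responding (the classic summation test).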
Title: Pavlovian safety learning: An integrative theoretical review.
Pub Date: 2024-08-15 | DOI: 10.3758/s13423-024-02558-5
Hanshu Zhang, Peng-Fei Zhu, Cheng-Ta Yang
In applied visual search settings, observers often face an unknown number of targets, which can reduce both accuracy and speed. Our study addresses whether searching collaboratively as a dyad can enhance search efficiency and mitigate these costs. Utilizing the capacity coefficient, we evaluated search efficiency and explored the interplay of task difficulty and termination rule in collaborative visual search. Our prediction that collaborative benefits would increase with elevated task difficulty was not supported in Experiment 1, where participants judged the presence of any target. In contrast, Experiment 2 demonstrated that dyads exhibited greater search efficiency during exhaustive searches for multiple targets under elevated task difficulty. Notably, our findings indicated an advantage for dyad searches over baseline predictions derived from individual searches. Our results underscore the significance of task difficulty and termination rules in leveraging human resources to improve collaborative visual search performance.
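The capacity coefficient mentioned above compares the combined (here, dyad) condition against a baseline built from the individual conditions via cumulative hazard functions, H(t) = -log S(t), where S(t) is the survivor function of the response-time distribution. A minimal sketch with simulated response times (the exponential distributions are stand-ins for real RT data):

```python
import numpy as np

rng = np.random.default_rng(2)

def cumulative_hazard(rts, t):
    """H(t) = -log S(t), with S(t) the empirical survivor function."""
    survivor = np.mean(rts > t)
    return -np.log(survivor)

# Toy response-time samples (seconds); real data would come from the task.
rt_single_a = rng.exponential(0.6, 5000)   # first individual searching alone
rt_single_b = rng.exponential(0.6, 5000)   # second individual searching alone
rt_dyad = rng.exponential(0.3, 5000)       # dyad (here: as fast as a parallel race)

t = 0.4
capacity = cumulative_hazard(rt_dyad, t) / (
    cumulative_hazard(rt_single_a, t) + cumulative_hazard(rt_single_b, t)
)
# capacity > 1 indicates super-capacity (a true collaborative benefit);
# capacity < 1 indicates limited capacity relative to the independent baseline.
```

With these stand-in distributions the dyad performs exactly like an independent parallel race of the two individuals, so the coefficient hovers around 1; empirical dyad advantages of the kind the abstract reports would push it above 1.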
Title: Group efficiency based on the termination rule in the multiple-targets visual search task.
Pub Date: 2024-08-13 | DOI: 10.3758/s13423-024-02554-9
Dillon H Murphy
In our everyday lives, we must remember important information, especially if there are consequences for forgetting. In this review, I discuss recent work on responsible remembering: the strategic and effortful prioritization of important information with consequences for forgetting. Thus far, research on responsible remembering has revealed several key factors and mechanisms, still being refined, that work together to enhance memory for important information: the identification and selection of what to remember (metacognitive reflectivity), the forgetting of less important information to facilitate memory for items that do need to be remembered (responsible forgetting), the functional prioritization of attention at the expense of competing factors (responsible attention), and the selective recall of important information via efficient retrieval strategies (responsible retrieval). Together, these functions form a cohesive system that aims to selectively prioritize, encode, and recall information deemed important based on its anticipated utility or the consequences of forgetting, and considering the importance of information may be a critical memory adaptation as we age. Specifically, if younger and older adults learn to self-assess and prioritize important information that has negative consequences if forgotten, engage in strategic forgetting, efficiently allocate their attentional resources, and utilize effective retrieval operations, memory for that important information can be enhanced.
Title: Responsible remembering: The role of metacognition, forgetting, attention, and retrieval in adaptive memory.
Pub Date: 2024-08-09 | DOI: 10.3758/s13423-024-02552-x
Svetlana Pinet, Clara D Martin
Literate adults are able to produce the same word in different language modalities, for instance, through speaking and writing. Yet how speaking and writing interact is not well understood. The present study takes a new perspective on the question of the co-activation of phonological and orthographic representations in speaking and writing by examining the acquisition of novel words. We tested how novel words get integrated into modality-specific lexicons by biasing novel word acquisition toward speaking or writing and assessing cross-modal transfer at the first stages of learning. Participants learned novel words paired with pictures of novel objects and practiced them overtly through speaking or typing. At test, typed training led to higher recall accuracy than spoken training, whether words were recalled through typing or speaking. Typing performance (response times and durations) benefited more from typed than from spoken training. Crucially, speaking performance did not benefit specifically from spoken training and was similar after spoken or typed training. The results are compatible with an asymmetric integration into the phonological and orthographic lexicons according to the modality of training, with representations created in the orthographic lexicon transferring directly to the phonological lexicon, while the opposite does not seem to occur. Cross-modal transfer dynamics are discussed according to the level of lexical activation.
Title: Cross-modal interactions in language production: evidence from word learning.
Pub Date: 2024-08-07 | DOI: 10.3758/s13423-024-02541-0
Craig A Thorburn, Ellen Lau, Naomi H Feldman
Adults struggle to learn non-native speech categories in many experimental settings (Goto, Neuropsychologia, 9(3), 317-323, 1971), but learn efficiently in a video game paradigm where non-native speech sounds have functional significance (Lim & Holt, Cognitive Science, 35(7), 1390-1405, 2011). Behavioral and neural evidence from this and other paradigms points toward the involvement of reinforcement learning mechanisms in speech category learning (Harmon, Idemaru, & Kapatsinski, Cognition, 189, 76-88, 2019; Lim, Fiez, & Holt, Proceedings of the National Academy of Sciences, 116, 201811992, 2019). We formalize this hypothesis computationally and implement a deep reinforcement learning network to map between environmental input and actions. Comparing against a supervised learning model, we show that the reinforcement network closely matches aspects of human behavior in two experiments: learning of synthesized auditory noise tokens and improvement in speech sound discrimination. Both models perform comparably, and the similarity in their outputs leads us to believe that there is little inherent computational benefit to a reward-based learning mechanism. We suggest that the specific neural circuitry engaged by the paradigm and links between striatum and superior temporal areas play a critical role in effective learning.
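A reward-based category learner of the kind the abstract describes can be illustrated with a tabular stand-in for the deep network. The tokens, actions, and parameters below are all hypothetical; correct categorization is reinforced and the value update is the standard one-step prediction-error rule:

```python
import random

random.seed(3)

# Two "sound categories", each emitting a few discrete tokens (stand-ins
# for acoustic exemplars); actions are the two category responses.
tokens = {"cat_A": ["a1", "a2", "a3"], "cat_B": ["b1", "b2", "b3"]}
actions = ["respond_A", "respond_B"]
Q = {(tok, act): 0.0 for cat in tokens for tok in tokens[cat] for act in actions}

alpha, epsilon = 0.2, 0.1
for _ in range(2000):
    cat = random.choice(list(tokens))
    tok = random.choice(tokens[cat])
    if random.random() < epsilon:                      # explore
        act = random.choice(actions)
    else:                                              # exploit current values
        act = max(actions, key=lambda a: Q[(tok, a)])
    reward = 1.0 if act == "respond_" + cat[-1] else 0.0
    Q[(tok, act)] += alpha * (reward - Q[(tok, act)])  # prediction-error update

# The greedy policy after training labels each token with its category.
policy = {tok: max(actions, key=lambda a: Q[(tok, a)])
          for cat in tokens for tok in tokens[cat]}
```

The contrast the paper draws would pit this reward-driven learner against a supervised learner that receives the correct label directly on every trial; with enough trials both converge on the same mapping, which is consistent with the abstract's conclusion that the benefit lies in the engaged circuitry rather than the learning rule itself.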
Title: Exploring the effectiveness of reward-based learning strategies for second-language speech sounds.
Pub Date: 2024-08-06 | DOI: 10.3758/s13423-024-02553-w
We investigated the contribution of multisensory predictions to body ownership and, beyond that, to the integration of body-related signals. Contrary to the prevailing idea that cues must be perceived simultaneously in order to be integrated, we propose the prediction-confirmation account, according to which a perceived cue can be integrated with a predicted cue as long as both signals are relatively simultaneous. To test this hypothesis, a standard rubber hand illusion (RHI) paradigm was used. In the first part of each trial, the illusion was induced while participants observed the rubber hand being touched with a paintbrush. In the subsequent part of the trial, (i) both the rubber hand and the participant's real hand were stroked as before (visible/synchronous condition), (ii) the rubber hand was no longer stroked (visible/tactile-only condition), or (iii) both the rubber hand and the participant's real hand were synchronously stroked while the location where the rubber hand was touched was occulted (occulted/synchronous condition). In this last condition, participants still perceived the approaching movement of the paintbrush; based on this visual cue, they could predict the timepoint at which the tactile cue should occur (i.e., visuotactile predictions). Our major finding was that, compared with the visible/tactile-only condition, the occulted/synchronous condition did not exhibit a decrease of the RHI relative to the visible/synchronous condition. This finding supports the prediction-confirmation account and suggests that this mechanism operates even in the standard version of the RHI.
{"title":"The prediction-confirmation account of the sense of body ownership: Evidence from a rubber hand illusion paradigm.","authors":"Loïc P Heurley, Léa Obrecht, Hélène Vanborren, Fleur Touzard, Thibaut Brouillet","doi":"10.3758/s13423-024-02553-w","DOIUrl":"https://doi.org/10.3758/s13423-024-02553-w","url":null,"abstract":"<p><p>We investigated the contribution of multisensory predictions to body ownership and, beyond that, to the integration of body-related signals. Contrary to the prevailing idea that cues must be perceived simultaneously to be integrated, we proposed the prediction-confirmation account. According to this account, a perceived cue can be integrated with a predicted cue as long as both signals are relatively simultaneous. To test this hypothesis, a standard rubber hand illusion (RHI) paradigm was used. In the first part of each trial, the illusion was induced while participants observed the rubber hand being touched with a paintbrush. In the subsequent part of the trial, (i) both the rubber hand and the participant's real hand were stroked as before (i.e., visible/synchronous condition), (ii) the rubber hand was no longer stroked (i.e., visible/tactile-only condition), or (iii) both the rubber hand and the participant's real hand were synchronously stroked while the location where the rubber hand was touched was occulted (i.e., occulted/synchronous condition). In this latter condition, however, participants still perceived the approaching movement of the paintbrush. Thus, based on this visual cue, participants could properly predict the timepoint at which the tactile cue should occur (i.e., visuotactile predictions). Our major finding was that, compared with the visible/tactile-only condition, the occulted/synchronous condition did not exhibit a decrease in the RHI, just as in the visible/synchronous condition. 
This finding supports the prediction-confirmation account and suggests that this mechanism operates even in the standard version of the RHI.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141894171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-05DOI: 10.3758/s13423-024-02547-8
Sophie Desjardins, Rui Tang, Seffie Yip, Mathieu Roy, A Ross Otto
When given a choice, people will avoid cognitively effortful courses of action because the experience of effort is evaluated as aversive and costly. At the same time, a body of work spanning psychology, economics, and neuroscience suggests that goods, actions, and experiences are often evaluated in the context in which they are encountered, rather than in absolute terms. To probe the extent to which the evaluation of cognitive effort is also context-dependent, we had participants learn associations between unique stimuli and subjective demand levels across low-demand and high-demand contexts. We probed demand preferences and subjective evaluation using a forced-choice paradigm, as well as by examining effort ratings taken both on-line (during learning) and off-line (after choice). When choosing between two stimuli objectively identical in terms of demand, participants showed a clear preference for the stimulus learned in the low- versus high-demand context and rated this stimulus as more subjectively effortful than the low-demand context in on-line but not off-line ratings, suggesting an assimilation effect. Finally, we observed that participants who exhibited stronger assimilation effects in off-line demand ratings were more likely to manifest an assimilation effect in demand preferences. Broadly, our findings suggest that effort evaluations occur in a context-dependent manner and are specifically assimilated to the broader context in which they occur.
{"title":"Context effects in cognitive effort evaluation.","authors":"Sophie Desjardins, Rui Tang, Seffie Yip, Mathieu Roy, A Ross Otto","doi":"10.3758/s13423-024-02547-8","DOIUrl":"https://doi.org/10.3758/s13423-024-02547-8","url":null,"abstract":"<p><p>When given a choice, people will avoid cognitively effortful courses of action because the experience of effort is evaluated as aversive and costly. At the same time, a body of work spanning psychology, economics, and neuroscience suggests that goods, actions, and experiences are often evaluated in the context in which they are encountered, rather than in absolute terms. To probe the extent to which the evaluation of cognitive effort is also context-dependent, we had participants learn associations between unique stimuli and subjective demand levels across low-demand and high-demand contexts. We probed demand preferences and subjective evaluation using a forced-choice paradigm, as well as by examining effort ratings taken both on-line (during learning) and off-line (after choice). When choosing between two stimuli objectively identical in terms of demand, participants showed a clear preference for the stimulus learned in the low- versus high-demand context and rated this stimulus as more subjectively effortful than the low-demand context in on-line but not off-line ratings, suggesting an assimilation effect. Finally, we observed that participants who exhibited stronger assimilation effects in off-line demand ratings were more likely to manifest an assimilation effect in demand preferences. 
Broadly, our findings suggest that effort evaluations occur in a context-dependent manner and are specifically assimilated to the broader context in which they occur.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141889992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}