Information entropy facilitates (not impedes) lexical processing during language comprehension.
Hossein Karimi, Pete Weber, Jaden Zinn
Pub Date: 2024-10-01 | Epub Date: 2024-02-15 | DOI: 10.3758/s13423-024-02463-x
It is well known that contextual predictability facilitates word identification, but it is less clear whether the uncertainty associated with the current context (i.e., its lexical entropy) influences sentence processing. On the one hand, high-entropy contexts may lead to interference due to a greater number of lexical competitors. On the other hand, predicting multiple lexical competitors may facilitate processing through the preactivation of shared semantic features. In this study, we examined whether entropy measured at the trial level (i.e., for each participant, for each item) corresponds to facilitatory or inhibitory effects. Trial-level entropy captures each individual's knowledge about specific contexts and is therefore a more valid and sensitive measure of entropy than the commonly employed item-level entropy. Participants (N = 112) completed two experimental sessions (in counterbalanced order) separated by a 3- to 14-day interval. In one session, they produced up to 10 completions for sentence fragments (N = 647). In the other session, they read the same sentences, each including a target word (whose entropy value was calculated from the produced completions), while reading times were measured. We observed a facilitatory (not inhibitory) effect of trial-level entropy on lexical processing over and above item-level measures of lexical predictability (including cloze probability, surprisal, and semantic constraint). Additional analyses revealed that greater semantic overlap between the target and the produced responses facilitated target processing. Thus, the results support theories of lexical prediction maintaining that prediction involves broad activation of semantic features rather than activation of full lexical forms.
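For readers unfamiliar with the measure: lexical entropy over a set of produced completions is, on the standard Shannon definition, zero when everyone produces the same word and highest when completions are spread evenly over many candidates. A minimal sketch of how a trial-level value could be computed from one participant-item's completions (the function name and the example completions are illustrative, not taken from the paper):

```python
import math
from collections import Counter

def shannon_entropy(completions):
    """Shannon entropy (in bits) of the distribution of produced completions."""
    counts = Counter(completions)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# One trial: up to 10 completions produced for a sentence fragment.
# Low entropy: responses concentrated on one strong candidate.
print(shannon_entropy(["cake"] * 8 + ["pie", "bread"]))              # ~0.92 bits
# High entropy: responses spread evenly over many candidates.
print(shannon_entropy(["cake", "pie", "bread", "fruit", "soup"] * 2))  # ~2.32 bits
```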
{"title":"Information entropy facilitates (not impedes) lexical processing during language comprehension.","authors":"Hossein Karimi, Pete Weber, Jaden Zinn","doi":"10.3758/s13423-024-02463-x","DOIUrl":"10.3758/s13423-024-02463-x","url":null,"abstract":"<p><p>It is well known that contextual predictability facilitates word identification, but it is less clear whether the uncertainty associated with the current context (i.e., its lexical entropy) influences sentence processing. On the one hand, high entropy contexts may lead to interference due to greater number of lexical competitors. On the other hand, predicting multiple lexical competitors may facilitate processing through the preactivation of shared semantic features. In this study, we examined whether entropy measured at the trial level (i.e., for each participant, for each item) corresponds to facilitatory or inhibitory effects. Trial-level entropy captures each individual's knowledge about specific contexts and is therefore a more valid and sensitive measure of entropy (relative to the commonly employed item-level entropy). Participants (N = 112) completed two experimental sessions (with counterbalanced orders) that were separated by a 3- to 14-day interval. In one session, they produced up to 10 completions for sentence fragments (N = 647). In another session, they read the same sentences including a target word (whose entropy value was calculated based on the produced completions) while reading times were measured. We observed a facilitatory (not inhibitory) effect of trial-level entropy on lexical processing over and above item-level measures of lexical predictability (including cloze probability, surprisal, and semantic constraint). Extra analyses revealed that greater semantic overlap between the target and the produced responses facilitated target processing. Thus, the results lend support to theories of lexical prediction maintaining that prediction involves broad activation of semantic features rather than activation of full lexical forms.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":"2102-2117"},"PeriodicalIF":4.3,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11472653/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139741836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distinct but related abilities for visual and haptic object recognition.
Jason K Chow, Thomas J Palmeri, Isabel Gauthier
Pub Date: 2024-10-01 | Epub Date: 2024-02-21 | DOI: 10.3758/s13423-024-02471-x
People vary in their ability to recognize objects visually. Individual differences in matching and recognizing objects visually are supported by a domain-general ability that captures common variance across different tasks (e.g., Richler et al., Psychological Review, 126, 226-251, 2019). Behavioral (e.g., Cooke et al., Neuropsychologia, 45, 484-495, 2007) and neural evidence (e.g., Amedi, Cerebral Cortex, 12, 1202-1212, 2002) suggest overlapping mechanisms in the processing of visual and haptic information in the service of object recognition, but it is unclear whether such group-average results generalize to individual differences. Psychometrically validated measures are required, and these have been lacking in the haptic modality. We investigated whether object recognition ability is specific to vision or extends to haptics, using psychometric measures we developed: two visual tests and four haptic tests (two each for two kinds of haptic exploration), with different objects and different formats, administered to 97 participants to measure domain-general visual and haptic abilities and to test for relations between them. Partial correlation and confirmatory factor analyses converged to support the existence of a domain-general haptic object recognition ability that is moderately correlated with domain-general visual object recognition ability. Visual and haptic abilities share about 25% of their variance, supporting the existence of a multisensory domain-general ability while leaving a substantial amount of residual variance for modality-specific abilities. These results extend our understanding of the structure of object recognition abilities: while some mechanisms may generalize across categories, tasks, and modalities, others remain distinct between modalities.
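For concreteness, "about 25% of their variance" shared implies a correlation of roughly .5 between the visual and haptic ability factors, since shared variance is the squared correlation:

```latex
r_{\text{visual,haptic}} \approx 0.5
\quad\Rightarrow\quad
r^2 \approx (0.5)^2 = 0.25 \;\; (\sim 25\% \text{ shared variance})
```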
{"title":"Distinct but related abilities for visual and haptic object recognition.","authors":"Jason K Chow, Thomas J Palmeri, Isabel Gauthier","doi":"10.3758/s13423-024-02471-x","DOIUrl":"10.3758/s13423-024-02471-x","url":null,"abstract":"<p><p>People vary in their ability to recognize objects visually. Individual differences for matching and recognizing objects visually is supported by a domain-general ability capturing common variance across different tasks (e.g., Richler et al., Psychological Review, 126, 226-251, 2019). Behavioral (e.g., Cooke et al., Neuropsychologia, 45, 484-495, 2007) and neural evidence (e.g., Amedi, Cerebral Cortex, 12, 1202-1212, 2002) suggest overlapping mechanisms in the processing of visual and haptic information in the service of object recognition, but it is unclear whether such group-average results generalize to individual differences. Psychometrically validated measures are required, which have been lacking in the haptic modality. We investigate whether object recognition ability is specific to vision or extends to haptics using psychometric measures we have developed. We use multiple visual and haptic tests with different objects and different formats to measure domain-general visual and haptic abilities and to test for relations across them. We measured object recognition abilities using two visual tests and four haptic tests (two each for two kinds of haptic exploration) in 97 participants. Partial correlation and confirmatory factor analyses converge to support the existence of a domain-general haptic object recognition ability that is moderately correlated with domain-general visual object recognition ability. Visual and haptic abilities share about 25% of their variance, supporting the existence of a multisensory domain-general ability while leaving a substantial amount of residual variance for modality-specific abilities. These results extend our understanding of the structure of object recognition abilities; while there are mechanisms that may generalize across categories, tasks, and modalities, there are still other mechanisms that are distinct between modalities.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":"2148-2159"},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139913351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What, if anything, can be considered an amodal sensory dimension?
Charles Spence, Nicola Di Stefano
Pub Date: 2024-10-01 | Epub Date: 2024-02-21 | DOI: 10.3758/s13423-023-02447-3
The term 'amodal' is key in several research fields across experimental psychology and cognitive neuroscience, including developmental and perception science. However, despite being regularly used in the literature, the term means something different to researchers working in different contexts. Many developmental scientists take the term to refer to perceptual qualities, such as the size and shape of an object, that can be picked up by multiple senses (e.g., vision and touch potentially providing information relevant to the same physical stimulus/property). However, the amodal label is also widely used for qualities that are not directly sensory, such as numerosity, rhythm, and synchrony. Cognitive neuroscientists, by contrast, tend to use the term amodal to refer to central cognitive processes and brain areas that do not appear to be preferentially responsive to a particular sensory modality, or to symbolic or formal representations that essentially lack any modality and are assumed to play a role in the higher processing of sensory information. Finally, perception scientists sometimes refer to the phenomenon of 'amodal completion', the spontaneous completion of perceptual information that is missing when occluded objects are presented to observers. In this paper, we review the various ways in which the term 'amodal' has been used in the literature and the evidence supporting each use. Moreover, we highlight some of the properties that have been suggested to be 'amodal' over the years. We then address some of the questions that arise from the reviewed evidence, such as: Do different uses of the term refer to different domains, for example, sensory information, perceptual processes, or perceptual representations? Are there commonalities among the different uses of the term? To what extent is research on cross-modal associations (or correspondences) related to, or able to shed light on, amodality? And how is the notion of amodal related to multisensory integration? Based on the reviewed evidence, we argue that there is, as yet, no convincing empirical evidence that amodal sensory qualities exist. We therefore suggest that the term amodal would be more meaningful with respect to abstract cognition than to sensory perception, the latter being more adequately explained/understood in terms of highly redundant cross-modal correspondences.
{"title":"What, if anything, can be considered an amodal sensory dimension?","authors":"Charles Spence, Nicola Di Stefano","doi":"10.3758/s13423-023-02447-3","DOIUrl":"10.3758/s13423-023-02447-3","url":null,"abstract":"<p><p>The term 'amodal' is a key topic in several different research fields across experimental psychology and cognitive neuroscience, including in the areas of developmental and perception science. However, despite being regularly used in the literature, the term means something different to the researchers working in the different contexts. Many developmental scientists conceive of the term as referring to those perceptual qualities, such as, for example, the size and shape of an object, that can be picked up by multiple senses (e.g., vision and touch potentially providing information relevant to the same physical stimulus/property). However, the amodal label is also widely used in the case of those qualities that are not directly sensory, such as, for example, numerosity, rhythm, synchrony, etc. Cognitive neuroscientists, by contrast, tend to use the term amodal to refer to those central cognitive processes and brain areas that do not appear to be preferentially responsive to a particular sensory modality or to those symbolic or formal representations that essentially lack any modality and that are assumed to play a role in the higher processing of sensory information. Finally, perception scientists sometimes refer to the phenomenon of 'amodal completion', referring to the spontaneous completion of perceptual information that is missing when occluded objects are presented to observers. In this paper, we review the various different ways in which the term 'amodal' has been used in the literature and the evidence supporting the various uses of the term. Morever, we highlight some of the various properties that have been suggested to be 'amodal' over the years. Then, we try to address some of the questions that arise from the reviewed evidence, such as: Do different uses of the 'term' refer to different domains, for example, sensory information, perceptual processes, or perceptual representations? Are there any commonalities among the different uses of the term? To what extent is research on cross-modal associations (or correspondences) related to, or can shed light on, amodality? And how is the notion of amodal related to multisensory integration? Based on the reviewed evidence, it is argued that there is, as yet, no convincing empirical evidence to support the claim that amodal sensory qualities exist. We thus suggest that use of the term amodal would be more meaningful with respect to abstract cognition rather than necessarily sensory perception, the latter being more adequately explained/understood in terms of highly redundant cross-modal correspondences.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":"1915-1933"},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11543734/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139913354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Separated hands further response-response binding effects.
Silvia Selimi, Christian Frings, Birte Moeller
Pub Date: 2024-10-01 | Epub Date: 2024-03-04 | DOI: 10.3758/s13423-023-02419-7
Action control is hierarchically organized. Multiple consecutive responses can be integrated into a higher-order event representation and can retrieve each other upon repetition, resulting in so-called response-response binding effects. Previous research indicates that the spatial separation of responses can affect how easily they can be cognitively separated. In this study, we introduced a barrier between the responding hands to investigate whether the spatial separation of two responses also influences response-response binding effects. In line with previous research on stimulus-response binding, we expected increased separability of responses to result in stronger response-response binding effects when the responding hands were separated by a barrier. We indeed found stronger response-response binding effects with separated hands. The results indicate that a more distinct representation of individual actions through increased separability might benefit the control of hierarchical actions.
{"title":"Separated hands further response-response binding effects.","authors":"Silvia Selimi, Christian Frings, Birte Moeller","doi":"10.3758/s13423-023-02419-7","DOIUrl":"10.3758/s13423-023-02419-7","url":null,"abstract":"<p><p>Action control is hierarchically organized. Multiple consecutive responses can be integrated into an event representation of higher order and can retrieve each other upon repetition, resulting in so-called response-response binding effects. Previous research indicates that the spatial separation of responses can affect how easily they can be cognitively separated. In this study, we introduced a barrier between the responding hands to investigate whether the spatial separation of two responses also influences response-response binding effects. In line with previous research on stimulus-response binding, we expected an increased separability of responses to result in stronger response-response binding effects when responding hands were separated by a barrier. We indeed found stronger response-response binding effects with separated hands. Results indicate that a more distinct representation of individual actions through increased separability might benefit the control of hierarchical actions.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":"2226-2233"},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11543708/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140022450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Tweedledum and Tweedledee of dynamic decisions: Discriminating between diffusion decision and accumulator models.
Peter D Kvam
Pub Date: 2024-10-01 | DOI: 10.3758/s13423-024-02587-0
Theories of dynamic decision-making are typically built on evidence accumulation, which is modeled using racing accumulators or diffusion models that track a shifting balance of support over time. However, these two types of models are only two special cases of a more general evidence accumulation process where options correspond to directions in an accumulation space. Using this generalized evidence accumulation approach as a starting point, I identify four ways to discriminate between absolute-evidence and relative-evidence models. First, an experimenter can look at the information that decision-makers considered to identify whether there is a filtering of near-zero evidence samples, which is characteristic of a relative-evidence decision rule (e.g., diffusion decision model). Second, an experimenter can disentangle different components of drift rates by manipulating the discriminability of the two response options relative to the stimulus to delineate the balance of evidence from the total amount of evidence. Third, a modeler can use machine learning to classify a set of data according to its generative model. Finally, machine learning can also be used to directly estimate the geometric relationships between choice options. I illustrate these different approaches by applying them to data from an orientation-discrimination task, showing converging conclusions across all four methods in favor of accumulator-based representations of evidence during choice. These tools can clearly delineate absolute-evidence and relative-evidence models, and should be useful for comparing many other types of decision theories.
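A toy simulation makes the contrast concrete: a diffusion (relative-evidence) model tracks the signed balance of support between two opposed bounds, whereas a race (absolute-evidence) model gives each option its own accumulator and its own bound. This is generic textbook code with illustrative parameter values, not the generalized accumulation framework or analysis pipeline from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_trial(drift=0.1, bound=1.0, dt=0.01, noise=1.0):
    """Relative evidence: one accumulator tracks the shifting balance of support."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return ("A" if x > 0 else "B"), t

def race_trial(drift_a=0.12, drift_b=0.08, bound=1.0, dt=0.01, noise=1.0):
    """Absolute evidence: each option independently races to its own bound.
    (Real race models often also truncate accumulators at zero.)"""
    a = b = t = 0.0
    while a < bound and b < bound:
        a += drift_a * dt + noise * np.sqrt(dt) * rng.normal()
        b += drift_b * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return ("A" if a >= bound else "B"), t

choices, rts = zip(*(diffusion_trial() for _ in range(1000)))
print("diffusion: P(A) =", choices.count("A") / 1000, "mean RT =", np.mean(rts))
```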
{"title":"The Tweedledum and Tweedledee of dynamic decisions: Discriminating between diffusion decision and accumulator models.","authors":"Peter D Kvam","doi":"10.3758/s13423-024-02587-0","DOIUrl":"https://doi.org/10.3758/s13423-024-02587-0","url":null,"abstract":"<p><p>Theories of dynamic decision-making are typically built on evidence accumulation, which is modeled using racing accumulators or diffusion models that track a shifting balance of support over time. However, these two types of models are only two special cases of a more general evidence accumulation process where options correspond to directions in an accumulation space. Using this generalized evidence accumulation approach as a starting point, I identify four ways to discriminate between absolute-evidence and relative-evidence models. First, an experimenter can look at the information that decision-makers considered to identify whether there is a filtering of near-zero evidence samples, which is characteristic of a relative-evidence decision rule (e.g., diffusion decision model). Second, an experimenter can disentangle different components of drift rates by manipulating the discriminability of the two response options relative to the stimulus to delineate the balance of evidence from the total amount of evidence. Third, a modeler can use machine learning to classify a set of data according to its generative model. Finally, machine learning can also be used to directly estimate the geometric relationships between choice options. I illustrate these different approaches by applying them to data from an orientation-discrimination task, showing converging conclusions across all four methods in favor of accumulator-based representations of evidence during choice. These tools can clearly delineate absolute-evidence and relative-evidence models, and should be useful for comparing many other types of decision theories.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142366346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Serial dependence (SD) is a phenomenon wherein current perceptions are biased by the previous stimulus and response. This helps to attenuate perceptual noise and variability in sensory input and facilitates stable ongoing perception of the environment. However, little is known about the developmental trajectory of SD. This study investigates how the stimulus and response biases of the SD effect develop across three age groups. Conventional analyses, in which previous stimulus and previous response biases were assessed separately, revealed significant changes in the biases over development: previous stimulus bias shifted from repulsion to attraction, while previous response bias evolved from attraction to greater attraction. However, stimulus and response orientations were strongly correlated. Therefore, a generalized linear mixed-effects (GLME) analysis that simultaneously considered both the previous stimulus and the previous response outperformed the separate analyses. It revealed that previous stimulus and previous response produce two distinct biases with different developmental trajectories: the repulsion bias from the previous stimulus remained relatively stable across all age groups, whereas the attraction bias toward the previous response was significantly stronger in adults than in children and adolescents. These findings demonstrate that the repulsion bias from preceding stimuli is established early in the developing brain (at least by around 10 years of age), while the attraction bias toward responses is not fully developed until adulthood. Our findings provide new insights into the development of SD and into how humans integrate two opposing mechanisms into their perceptual responses to external input during development.
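The analytic move that matters here is entering the previous stimulus and the previous response as simultaneous predictors of the current response error, with participants as a random effect, so that their strong mutual correlation is partitioned within one model rather than confounded across two separate analyses. A hedged sketch of that model structure on simulated data (all column names and effect sizes are hypothetical, and statsmodels' MixedLM is a linear mixed model standing in for the paper's generalized variant):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
# Hypothetical trial-level data: relative orientations (deg) of the previous
# stimulus and previous response, and the current response error (deg).
prev_stim = rng.uniform(-45, 45, n)
prev_resp = prev_stim + rng.normal(0, 10, n)      # strongly correlated with prev_stim
# Simulated ground truth: repulsion from the stimulus, attraction to the response.
error = -0.05 * prev_stim + 0.15 * prev_resp + rng.normal(0, 5, n)
df = pd.DataFrame({"error": error, "prev_stim": prev_stim,
                   "prev_resp": prev_resp, "subject": rng.integers(0, 30, n)})

# Both predictors enter one mixed model with subject as a random effect.
model = smf.mixedlm("error ~ prev_stim + prev_resp", df, groups=df["subject"])
print(model.fit().summary())
```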
{"title":"The distinct development of stimulus and response serial dependence.","authors":"Liqin Zhou, Yujie Liu, Yuhan Jiang, Wenbo Wang, Pengfei Xu, Ke Zhou","doi":"10.3758/s13423-024-02474-8","DOIUrl":"10.3758/s13423-024-02474-8","url":null,"abstract":"<p><p>Serial dependence (SD) is a phenomenon wherein current perceptions are biased by the previous stimulus and response. This helps to attenuate perceptual noise and variability in sensory input and facilitates stable ongoing perceptions of the environment. However, little is known about the developmental trajectory of SD. This study investigates how the stimulus and response biases of the SD effect develop across three age groups. Conventional analyses, in which previous stimulus and response biases were assessed separately, revealed significant changes in the biases over time. Previous stimulus bias shifted from repulsion to attraction, while previous response bias evolved from attraction to greater attraction. However, there was a strong correlation between stimulus and response orientations. Therefore, a generalized linear mixed-effects (GLME) analysis that simultaneously considered both previous stimulus and response, outperformed separate analyses. This revealed that previous stimulus and response resulted in two distinct biases with different developmental trajectories. The repulsion bias of previous stimulus remained relatively stable across all age groups, whereas the attraction bias of previous response was significantly stronger in adults than in children and adolescents. These findings demonstrate that the repulsion bias towards preceding stimuli is established early in the developing brain (at least by around 10 years old), while the attraction bias towards responses is not fully developed until adulthood. Our findings provide new insights into the development of the SD phenomenon and how humans integrate two opposing mechanisms into their perceptual responses to external input during development.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":"2137-2147"},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11543724/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139913353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual perspective and body ownership modulate vicarious pain and touch: A systematic review.
Matteo P Lisi, Martina Fusaro, Salvatore Maria Aglioti
Pub Date: 2024-10-01 | Epub Date: 2024-03-01 | DOI: 10.3758/s13423-024-02477-5
We conducted a systematic review investigating the influence of visual perspective and body ownership (BO) on vicarious brain resonance and vicarious sensations during the observation of pain and touch. Indeed, the way in which brain reactivity and phenomenological experience can be modulated by blurring the bodily boundaries of self-other distinction is still unclear. We screened Scopus and Web of Science and identified 31 articles published from 2000 to 2022. Results show that assuming an egocentric perspective enhances vicarious resonance and vicarious sensations. Studies on synaesthetes suggest that vicarious conscious experiences are associated with an increased tendency to embody fake body parts, even in the absence of congruent multisensory stimulation. Moreover, immersive virtual reality studies show that the type of embodied virtual body can affect higher-order sensations, such as appropriateness, unpleasantness, and erogeneity, associated with the touched body part and the toucher's social identity. We conclude that perspective plays a key role in the resonance with others' pain and touch, and that full BO over virtual avatars allows the investigation of complex aspects of pain and touch perception that could not be studied in reality.
{"title":"Visual perspective and body ownership modulate vicarious pain and touch: A systematic review.","authors":"Matteo P Lisi, Martina Fusaro, Salvatore Maria Aglioti","doi":"10.3758/s13423-024-02477-5","DOIUrl":"10.3758/s13423-024-02477-5","url":null,"abstract":"<p><p>We conducted a systematic review investigating the influence of visual perspective and body ownership (BO) on vicarious brain resonance and vicarious sensations during the observation of pain and touch. Indeed, the way in which brain reactivity and the phenomenological experience can be modulated by blurring the bodily boundaries of self-other distinction is still unclear. We screened Scopus and WebOfScience, and identified 31 articles, published from 2000 to 2022. Results show that assuming an egocentric perspective enhances vicarious resonance and vicarious sensations. Studies on synaesthetes suggest that vicarious conscious experiences are associated with an increased tendency to embody fake body parts, even in the absence of congruent multisensory stimulation. Moreover, immersive virtual reality studies show that the type of embodied virtual body can affect high-order sensations such as appropriateness, unpleasantness, and erogeneity, associated with the touched body part and the toucher's social identity. We conclude that perspective plays a key role in the resonance with others' pain and touch, and full-BO over virtual avatars allows investigation of complex aspects of pain and touch perception which would not be possible in reality.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":"1954-1980"},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11543731/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140013238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brief category learning distorts perceptual space for complex scenes.
Gaeun Son, Dirk B Walther, Michael L Mack
Pub Date: 2024-10-01 | Epub Date: 2024-03-04 | DOI: 10.3758/s13423-024-02484-6
The formation of categories is known to distort perceptual space: representations are pushed away from category boundaries and pulled toward categorical prototypes. This phenomenon has been studied with artificially constructed objects, whose feature dimensions are easily defined and manipulated. How such category-induced perceptual distortions arise for complex, real-world scenes, however, remains largely unknown due to the technical challenge of measuring and controlling scene features. We address this question by generating realistic scene images from a high-dimensional continuous space using generative adversarial networks and using the images as stimuli in a novel learning task. Participants learned to categorize the scene images along arbitrary category boundaries and later reconstructed the same scenes from memory. Systematic biases in reconstruction errors closely tracked each participant's subjective category boundaries. These findings suggest that the perception of global scene properties is warped to align with a newly learned category structure after only a brief learning experience.
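The signature of the reported distortion can be expressed as a simple bias statistic: project each scene onto the axis orthogonal to a participant's category boundary and ask whether reconstructions fall farther from the boundary than the originals. A toy illustration with made-up values (the 1-D projection and all numbers are hypothetical simplifications of the paper's high-dimensional scene space):

```python
import numpy as np

# Hypothetical 1-D projection of scene space onto the axis orthogonal to a
# participant's subjective category boundary (boundary at 0, arbitrary units).
true_pos      = np.array([-0.8, -0.3, -0.1, 0.2, 0.5, 0.9])
reconstructed = np.array([-0.9, -0.45, -0.25, 0.35, 0.6, 0.95])

# Boundary repulsion: reconstruction errors point away from 0 on each side.
signed_error = reconstructed - true_pos
repulsion = signed_error * np.sign(true_pos)   # positive = pushed away from boundary
print(repulsion.mean())                        # > 0 indicates boundary repulsion
```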
{"title":"Brief category learning distorts perceptual space for complex scenes.","authors":"Gaeun Son, Dirk B Walther, Michael L Mack","doi":"10.3758/s13423-024-02484-6","DOIUrl":"10.3758/s13423-024-02484-6","url":null,"abstract":"<p><p>The formation of categories is known to distort perceptual space: representations are pushed away from category boundaries and pulled toward categorical prototypes. This phenomenon has been studied with artificially constructed objects, whose feature dimensions are easily defined and manipulated. How such category-induced perceptual distortions arise for complex, real-world scenes, however, remains largely unknown due to the technical challenge of measuring and controlling scene features. We address this question by generating realistic scene images from a high-dimensional continuous space using generative adversarial networks and using the images as stimuli in a novel learning task. Participants learned to categorize the scene images along arbitrary category boundaries and later reconstructed the same scenes from memory. Systematic biases in reconstruction errors closely tracked each participant's subjective category boundaries. These findings suggest that the perception of global scene properties is warped to align with a newly learned category structure after only a brief learning experience.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":"2234-2248"},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140028749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frequency-tagging EEG reveals the effect of attentional focus on abstract magnitude processing.
Cathy Marlair, Aliette Lochy, Virginie Crollen
Pub Date: 2024-10-01 | Epub Date: 2024-03-11 | DOI: 10.3758/s13423-024-02480-w
While humans can readily access the common magnitude of various codes such as digits, number words, or dot sets, it remains unclear whether this process occurs automatically, or only when explicitly attending to magnitude information. We addressed this question by examining the neural distance effect, a robust marker of magnitude processing, with a frequency-tagging approach. Electrophysiological responses were recorded while participants viewed rapid sequences of a base numerosity presented at 6 Hz (e.g., "2") in randomly mixed codes: digits, number words, canonical dot, and finger configurations. A deviant numerosity either close (e.g., "3") or distant (e.g., "8") from the base was inserted every five items. Participants were instructed to focus their attention either on the magnitude number feature (from a previous study), the parity number feature, a nonnumerical color feature or no specific feature. In the four attentional conditions, we found clear discrimination responses of the deviant numerosity despite its code variation. Critically, the distance effect (larger responses when base/deviant are distant than close) was present when participants were explicitly attending to magnitude and parity, but it faded with color and simple viewing instructions. Taken together, these results suggest automatic access to an abstract number representation but highlight the role of selective attention in processing the underlying magnitude information. This study therefore provides insights into how attention can modulate the neural activity supporting abstract magnitude processing.
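A useful piece of arithmetic implicit in the design: with items presented at 6 Hz and a deviant inserted every fifth item, discrimination of the deviant shows up in the EEG spectrum at 6/5 = 1.2 Hz and its harmonics. A generic sketch of how such tagged responses could be read out of a spectrum (simulated signal; not the authors' pipeline):

```python
import numpy as np

fs, dur = 512, 40.0                      # sampling rate (Hz), epoch length (s)
t = np.arange(0, dur, 1 / fs)
base_f, deviant_f = 6.0, 6.0 / 5         # deviant every 5th item -> 1.2 Hz

# Simulated EEG: responses at both tagged frequencies plus broadband noise.
eeg = (0.5 * np.sin(2 * np.pi * base_f * t)
       + 0.2 * np.sin(2 * np.pi * deviant_f * t)
       + np.random.default_rng(1).normal(0, 1, t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# With a 40-s epoch the frequency resolution is 0.025 Hz, so both tagged
# frequencies land on exact FFT bins.
for f in (base_f, deviant_f):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:.1f} Hz amplitude: {spectrum[idx]:.3f}")
```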
{"title":"Frequency-tagging EEG reveals the effect of attentional focus on abstract magnitude processing.","authors":"Cathy Marlair, Aliette Lochy, Virginie Crollen","doi":"10.3758/s13423-024-02480-w","DOIUrl":"10.3758/s13423-024-02480-w","url":null,"abstract":"<p><p>While humans can readily access the common magnitude of various codes such as digits, number words, or dot sets, it remains unclear whether this process occurs automatically, or only when explicitly attending to magnitude information. We addressed this question by examining the neural distance effect, a robust marker of magnitude processing, with a frequency-tagging approach. Electrophysiological responses were recorded while participants viewed rapid sequences of a base numerosity presented at 6 Hz (e.g., \"2\") in randomly mixed codes: digits, number words, canonical dot, and finger configurations. A deviant numerosity either close (e.g., \"3\") or distant (e.g., \"8\") from the base was inserted every five items. Participants were instructed to focus their attention either on the magnitude number feature (from a previous study), the parity number feature, a nonnumerical color feature or no specific feature. In the four attentional conditions, we found clear discrimination responses of the deviant numerosity despite its code variation. Critically, the distance effect (larger responses when base/deviant are distant than close) was present when participants were explicitly attending to magnitude and parity, but it faded with color and simple viewing instructions. Taken together, these results suggest automatic access to an abstract number representation but highlight the role of selective attention in processing the underlying magnitude information. This study therefore provides insights into how attention can modulate the neural activity supporting abstract magnitude processing.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":"2266-2274"},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140102337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The influence of depth on object selection and manipulation in visual working memory within a 3D context.
Jiehui Qian, Bingxue Fu, Ziqi Gao, Bowen Tan
Pub Date: 2024-10-01 | Epub Date: 2024-03-22 | DOI: 10.3758/s13423-024-02492-6
Recent studies have examined whether the internal selection mechanism functions similarly for perception and visual working memory (VWM). However, how we access and manipulate object representations distributed in a 3D space remains unclear. In this study, we used a memory search task to investigate the effect of depth on object selection and manipulation within VWM. The memory display consisted of colored items, half positioned at the near depth plane and half at the far plane. During memory maintenance, participants were instructed to search for a target representation and update its color. The results showed that under object-based attention (Experiments 1, 3, and 5), updating was faster for targets at the near plane than for those at the far plane. This effect was absent in VWM when deploying spatial attention (Experiment 2) and in visual search regardless of the type of attention deployed (Experiment 4). The differential effects of depth on spatial and object-based attention in VWM suggest that spatial attention relied primarily on 2D location information irrespective of depth, whereas object-based attention seemed to prioritize memory representations at the front plane before shifting to the back. Our findings shed light on the interaction between depth perception and the selection mechanisms within VWM in a 3D context, emphasizing the importance of ordinal, rather than metric, spatial information in guiding object-based attention in VWM.
{"title":"The influence of depth on object selection and manipulation in visual working memory within a 3D context.","authors":"Jiehui Qian, Bingxue Fu, Ziqi Gao, Bowen Tan","doi":"10.3758/s13423-024-02492-6","DOIUrl":"10.3758/s13423-024-02492-6","url":null,"abstract":"<p><p>Recent studies have examined whether the internal selection mechanism functions similarly for perception and visual working memory (VWM). However, the process of how we access and manipulate object representations distributed in a 3D space remains unclear. In this study, we utilized a memory search task to investigate the effect of depth on object selection and manipulation within VWM. The memory display consisted of colored items half positioned at the near depth plane and the other half at the far plane. During memory maintenance, the participants were instructed to search for a target representation and update its color. The results showed that under object-based attention (Experiments 1, 3, and 5), the update time was faster for targets at the near plane than for those at the far plane. This effect was absent in VWM when deploying spatial attention (Experiment 2) and in visual search regardless of the type of attention deployed (Experiment 4). The differential effects of depth on spatial and object-based attention in VWM suggest that spatial attention primarily relied on 2D location information irrespective of depth, whereas object-based attention seemed to prioritize memory representations at the front plane before shifting to the back. Our findings shed light on the interaction between depth perception and the selection mechanisms within VWM in a 3D context, emphasizing the importance of ordinal, rather than metric, spatial information in guiding object-based attention in VWM.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":"2293-2304"},"PeriodicalIF":3.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140194436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}