Cultural difference in ensemble emotion perception is an important research question, providing insights into the complexity of human cognition and social interaction. Here, we conducted two experiments to investigate how emotion perception is affected by other-ethnicity effects and ensemble coding. In Experiment 1, groups of Asian and Caucasian participants were tasked with assessing the average emotion of faces from their own ethnic group, the other ethnic group, and mixed-ethnicity groups. Results revealed that participants exhibited relatively accurate yet amplified emotion perception of own-group faces, with a tendency to overestimate the weight of faces from the other ethnic group. In Experiment 2, Asian participants were instructed to discern the emotion of a target face surrounded by Caucasian and Asian faces. Results corroborated the earlier findings, indicating that while participants accurately perceived emotions in faces of their own ethnicity, their perception of Caucasian faces was noticeably influenced by the presence of surrounding Asian faces. These findings collectively support the notion that the other-ethnicity effect stems from differential emotional amplification inherent in ensemble coding of emotion perception.
Title: Other ethnicity effects in ensemble coding of facial expressions
Authors: Zhenhua Zhao, Kelun Yaoma, Yujie Wu, Edwin Burns, Mengdan Sun, Haojiang Ying
Attention, Perception & Psychophysics, 86(7), 2412–2423. Published 2024-07-11. DOI: 10.3758/s13414-024-02920-8
Pub Date: 2024-07-11. DOI: 10.3758/s13414-024-02914-6
Nicola J. Morton, Matt Grice, Simon Kemp, Randolph C. Grace
The ratio of two magnitudes can take one of two values depending on the order in which they are operated on: a ‘big’ ratio of the larger to smaller magnitude, or a ‘small’ ratio of the smaller to larger. Although big and small ratio scales have different metric properties and carry divergent predictions for perceptual comparison tasks, no psychophysical studies have directly compared them. Two experiments are reported in which subjects implicitly learned to compare pairs of brightnesses and line lengths by non-symbolic feedback based on the scaled big ratio, small ratio, or difference of the magnitudes presented. Results of Experiment 1 showed all three operations were learned quickly and estimated with a high degree of accuracy that did not significantly differ across groups or between intensive and extensive modalities, though regressions on individual data suggested an overall predisposition towards differences. Experiment 2 tested whether subjects learned to estimate the operation trained or to associate stimulus pairs with correct responses. For each operation, Gaussian noise was added to the feedback that was constant for repetitions of each pair. For all subjects, coefficients for the added noise component were negative when entered in a regression model alongside the trained differences or ratios, and were statistically significant in 80% of individual cases. Thus, subjects learned to estimate the comparative operations and effectively ignored or suppressed the added noise. These results suggest the perceptual system is highly flexible in its capacity for non-symbolic computation, which may reflect a deeper connection between perceptual structure and mathematics.
Title: Non-symbolic estimation of big and small ratios with accurate and noisy feedback
Attention, Perception & Psychophysics, 86(6), 2169–2186. Open access (PMC11410853).
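The big/small ratio distinction described in the abstract above can be made concrete with a short sketch. The function names are illustrative, and the scaled feedback actually used in the experiments is not reproduced here:

```python
def big_ratio(a: float, b: float) -> float:
    """Ratio of the larger magnitude to the smaller (always >= 1)."""
    return max(a, b) / min(a, b)

def small_ratio(a: float, b: float) -> float:
    """Ratio of the smaller magnitude to the larger (always <= 1)."""
    return min(a, b) / max(a, b)

def difference(a: float, b: float) -> float:
    """Unsigned difference between the two magnitudes."""
    return abs(a - b)

# The two ratio scales are reciprocals of one another, but they have
# different metric properties as comparison scales: big ratios are
# unbounded above 1, while small ratios are compressed into (0, 1].
pair = (8.0, 2.0)
print(big_ratio(*pair))    # 4.0
print(small_ratio(*pair))  # 0.25
print(difference(*pair))   # 6.0
```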
Pub Date: 2024-07-11. DOI: 10.3758/s13414-024-02929-z
Zi-Xi Luo, Wang-Nan Pan, Xiang-Jun Zeng, Liang-Yu Gong, Yong-Chun Cai
There has been enduring debate on how attention alters contrast appearance. Recent research indicates that exogenous attention enhances contrast appearance for low-contrast stimuli but attenuates it for high-contrast stimuli. Similarly, one study has demonstrated that endogenous attention heightens perceived contrast for low-contrast stimuli, yet none have explored its impact on high-contrast stimuli. In this study, we investigated how endogenous attention alters contrast appearance, with a specific focus on high-contrast stimuli. In Experiment 1, we utilized the rapid serial visual presentation (RSVP) paradigm to direct endogenous attention, revealing that contrast appearance was enhanced for both low- and high-contrast stimuli. To eliminate potential influences from the confined attention field in the RSVP paradigm, Experiment 2 adopted the letter identification paradigm, deploying attention across a broader visual field. Results consistently indicated that endogenous attention increased perceived contrast for high-contrast stimuli. Experiment 3 employed equiluminant chromatic letters as stimuli in the letter identification task to eliminate potential interference from contrast adaptation, which might have occurred in Experiment 2. Remarkably, the boosting effect of endogenous attention persisted. Combining the results from these experiments, we propose that endogenous attention consistently enhances contrast appearance, irrespective of stimulus contrast levels. This stands in contrast to the effects of exogenous attention, suggesting that the mechanisms through which endogenous attention alters contrast appearance may differ from those of exogenous attention.
Title: Endogenous attention enhances contrast appearance regardless of stimulus contrast
Attention, Perception & Psychophysics, 86(6), 1883–1896.
Pub Date: 2024-07-09. DOI: 10.3758/s13414-024-02921-7
Gregory Davis
Conventional visual search tasks do not address attention directly, and their core manipulation of ‘set size’ – the number of displayed items – introduces stimulus confounds that hinder interpretation. However, alternative approaches have not been widely adopted, perhaps reflecting their complexity, assumptions, or indirect attention-sampling. Here, a new procedure, the ATtention Location And Size (‘ATLAS’) task, used probe displays to track attention’s location, breadth, and guidance during search. Though most probe displays comprised six items, participants reported only the single item they judged themselves to have perceived most clearly – indexing the attention ‘peak’. By sampling peaks across variable ‘choice sets’, the size and position of the attention window during search was profiled. These indices appeared to distinguish narrow from broad attention, signalled attention to pairs of items where it arose, and tracked evolving attention-guidance over time. ATLAS is designed to discriminate five key search modes: serial-unguided, sequential-guided, unguided attention to ‘clumps’ with local guidance, and broad parallel-attention with or without guidance. This initial investigation used only an example set of highly regular stimuli, but its broader potential should be investigated.
Title: ATLAS: Mapping ATtention’s Location And Size to probe five modes of serial and parallel search
Attention, Perception & Psychophysics, 86(6), 1938–1962. Open access (PMC11410986).
Pub Date: 2024-07-09. DOI: 10.3758/s13414-024-02924-4
Arryn Robbins, Anatolii Evdokimov
Categorical search involves looking for objects based on category information from long-term memory. Previous research has shown that search efficiency in categorical search is influenced by target/distractor similarity and category variability (i.e., heterogeneity). However, the interaction between these factors and their impact on different subprocesses of search remains unclear. This study examined the effects of target/distractor similarity and category variability on processes of categorical search. Using multidimensional scaling, we manipulated target/distractor similarity and measured category variability for the target categories that participants searched for. Eye-tracking data were collected to examine attentional guidance and target verification. The results demonstrated that the effect of category variability on response times (RTs) was dependent on the level of target/distractor similarity. Specifically, when distractors were highly similar to target categories, there was a negative relation between RTs and variability, with low-variability categories producing longer RTs than higher-variability categories. Surprisingly, this trend was present only in the eye-tracking measures of target verification, not attentional guidance. Our results suggest that searchers more effectively guide attention to low-variability categories compared to high-variability categories, regardless of the degree of similarity between targets and distractors. However, low category variability interferes with target-match decisions when distractors are highly similar to the category; thus, the advantage that low category variability provides to searchers is not equal across the processes of search.
Title: Distractor similarity and category variability effects in search
Attention, Perception & Psychophysics, 86(7), 2231–2250. Open access.
Pub Date: 2024-07-08. DOI: 10.3758/s13414-024-02919-1
Mark W Becker, Andrew Rodriguez, Jeffrey Bolkhovsky, Chad Peltier, Sylvia B Guillory
The low-prevalence effect (LPE) is the finding that target detection rates decline as targets become less frequent in a visual search task. A major source of this effect is thought to be that fewer targets result in lower quitting thresholds, i.e., observers respond target-absent after looking at fewer items compared to searches with a higher prevalence of targets. However, a lower quitting threshold does not directly account for an LPE in searches where observers continuously monitor a dynamic display for targets. In these tasks there are no discrete "trials" to which a quitting threshold could be applied. This study examines whether the LPE persists in this type of dynamic search context. Experiment 1 was a 2 (dynamic/static) × 2 (10%/40% target prevalence) design. Although overall performance was worse in the dynamic task, both tasks showed an LPE of similar magnitude. In Experiment 2, we replicated this effect using a task where subjects searched for either of two targets (Ts and Ls). One target appeared infrequently (10%) and the other moderately (40%). Given this method of manipulating prevalence rate, the quitting threshold explanation does not account for the LPE even for static displays. However, replicating Experiment 1, we found an LPE of similar magnitude for both search scenarios, and lower target detection rates with the dynamic displays, demonstrating that the LPE is a potential concern for both static and dynamic searches. These findings suggest an activation threshold explanation of the LPE may better account for our observations than the traditional quitting threshold model.
Title: Activation thresholds, not quitting thresholds, account for the low prevalence effect in dynamic search
Attention, Perception & Psychophysics, published online ahead of print.
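The quitting-threshold account that this abstract argues against can be illustrated with a toy Monte Carlo sketch. The set size, thresholds, and the mapping from prevalence to quitting threshold are all hypothetical, not taken from the study:

```python
import random

def miss_rate(quit_threshold: int, set_size: int = 12,
              n_trials: int = 10_000, seed: int = 1) -> float:
    """Toy observer on target-present trials: items are inspected in
    random order, and the observer responds 'target absent' after
    quit_threshold items without finding the target.
    Returns the proportion of missed targets."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(n_trials):
        target_pos = rng.randrange(set_size)  # target's place in inspection order
        if target_pos >= quit_threshold:      # observer quits before reaching it
            misses += 1
    return misses / n_trials

# If low prevalence lowers the quitting threshold, misses rise:
high_prev = miss_rate(quit_threshold=10)  # e.g., 40% prevalence: patient search
low_prev = miss_rate(quit_threshold=6)    # e.g., 10% prevalence: early quitting
print(high_prev, low_prev)  # the early-quitting observer misses more targets
```

In a continuous dynamic display there are no discrete trials over which such a threshold could operate, which is why the abstract instead favours an activation-threshold account.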
Pub Date: 2024-07-08. DOI: 10.3758/s13414-024-02917-3
Debora Nolte, Marc Vidal De Palol, Ashima Keshava, John Madrid-Carvajal, Anna L Gert, Eva-Marie von Butler, Pelin Kömürlüoğlu, Peter König
Extensive research conducted in controlled laboratory settings has prompted an inquiry into how results can be generalized to real-world situations influenced by the subjects' actions. Virtual reality lends itself ideally to investigating complex situations but requires accurate classification of eye movements, especially when combining it with time-sensitive data such as EEG. We recorded eye-tracking data in virtual reality and classified it into gazes and saccades using a velocity-based classification algorithm, and we cut the continuous data into smaller segments to deal with varying noise levels, as introduced in the REMoDNav algorithm. Furthermore, we corrected for participants' translational movement in virtual reality. Various measures, including visual inspection, event durations, and the velocity and dispersion distributions before and after gaze onset, indicate that we can accurately classify the continuous, free-exploration data. Combining the classified eye-tracking with the EEG data, we generated fixation-onset event-related potentials (ERPs) and event-related spectral perturbations (ERSPs), providing further evidence for the quality of the eye-movement classification and the timing of event onsets. Finally, investigating the correlation between single trials and the average ERP and ERSP revealed that fixation-onset ERSPs are less time-sensitive, require fewer repetitions of the same behavior, and are potentially better suited to study EEG signatures in naturalistic settings. In sum, we designed, modified, and tested an algorithm that allows the combination of EEG and eye-tracking data recorded in virtual reality.
Title: Combining EEG and eye-tracking in virtual reality: Obtaining fixation-onset event-related potentials and event-related spectral perturbations
Attention, Perception & Psychophysics, published online ahead of print.
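The velocity-based classification step described above can be sketched as a minimal threshold classifier (I-VT style). The threshold and coordinates are illustrative; the paper's adaptive segmentation, noise handling, and translation correction are not reproduced here:

```python
import numpy as np

def classify_samples(x_deg, y_deg, t_sec, velocity_threshold=50.0):
    """Label each gaze sample 'saccade' or 'gaze' by comparing its
    point-to-point angular velocity (deg/s) to a fixed threshold."""
    x, y, t = (np.asarray(a, dtype=float) for a in (x_deg, y_deg, t_sec))
    v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)  # sample-to-sample deg/s
    labels = np.where(v > velocity_threshold, "saccade", "gaze")
    return np.append(labels, labels[-1])  # pad so output matches input length

# 100-Hz toy trace: stable position, one fast 3-degree jump, stable again
t = np.arange(5) * 0.01
x = np.array([0.0, 0.0, 3.0, 3.0, 3.0])
y = np.zeros(5)
print(classify_samples(x, y, t))  # the jump sample is labelled 'saccade'
```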
Pub Date: 2024-07-02. DOI: 10.3758/s13414-024-02904-8
Xin Huang, Brian W L Wong, Hezul Tin-Yan Ng, Werner Sommer, Olaf Dimigen, Urs Maurer
Two classic experimental paradigms - masked repetition priming and the boundary paradigm - have played a pivotal role in understanding the process of visual word recognition. Traditionally, these paradigms have been employed by different communities of researchers, each with its own long-standing research tradition. Nevertheless, a review of the literature suggests that the brain-electric correlates of word processing established with both paradigms may show interesting similarities, in particular with regard to the location, timing, and direction of N1 and N250 effects. However, no direct comparison between the two paradigms has yet been undertaken. In the current study, we used combined eye-tracking/EEG to perform such a within-subject comparison using the same materials (single Chinese characters) as stimuli. To facilitate direct comparisons, we used a simplified version of the boundary paradigm - the single word boundary paradigm. Our results show the typical early repetition effects of N1 and N250 for both paradigms. However, repetition effects in N250 (i.e., a reduced negativity following identical-word primes/previews as compared to different-word primes/previews) were larger with the single word boundary paradigm than with masked priming. For N1 effects, repetition effects were similar across the two paradigms, showing a larger N1 after repetitions as compared to alternations. Therefore, the results indicate that at the neural level, a briefly presented and masked foveal prime produces qualitatively similar facilitatory effects on visual word recognition as a parafoveal preview before a single saccade, although such effects appear to be stronger in the latter case.
Title: Neural mechanism underlying preview effects and masked priming effects in visual word processing
Attention, Perception & Psychophysics, published online ahead of print.
Pub Date: 2024-07-02
DOI: 10.3758/s13414-024-02916-4
David Melcher, Ani Alaberkyan, Chrysi Anastasaki, Xiaoyi Liu, Michele Deodato, Gianluca Marsicano, Diogo Almeida
A key aspect of efficient visual processing is to use current and previous information to make predictions about what we will see next. In natural viewing, and when looking at words, there is typically an indication of forthcoming visual information from extrafoveal areas of the visual field before we make an eye movement to an object or word of interest. This "preview effect" has been studied for many years in the word reading literature and, more recently, in object perception. Here, we integrated methods from word recognition and object perception to investigate the timing of preview effects on neural measures of word recognition. Through a combined use of EEG and eye-tracking, a group of multilingual participants took part in a gaze-contingent, single-shot saccade experiment in which words appeared in their parafoveal visual field. In valid preview trials, the same word was presented during the preview and after the saccade, while in the invalid condition, the saccade target was a number string that turned into a word during the saccade. As hypothesized, the valid preview greatly reduced the fixation-related evoked response. Interestingly, multivariate decoding analyses revealed much earlier preview effects than previously reported for words, and individual decoding performance correlated with participant reading scores. These results demonstrate that a parafoveal preview can influence relatively early aspects of post-saccadic word processing and help to resolve some discrepancies between the word and object literatures.
An early effect of the parafoveal preview on post-saccadic processing of English words. Attention, Perception, & Psychophysics (2024). DOI: 10.3758/s13414-024-02916-4
Pub Date: 2024-06-26
DOI: 10.3758/s13414-024-02918-2
Fuminori Ono
Attention has a significant effect on time perception, as a person’s perception of duration varies depending on the object of one’s attention, even when the visual stimulus is consistent. This study aimed to identify the effects of directing participants’ attention after a stimulus has disappeared on time perception, as prior studies have examined only pre-stimulus direction. The stimulus used comprised two overlapping figures – one large and one small. After the stimulus was removed, participants were asked to judge the length of the presentation time and shape of one of the two figures. Consequently, the participants perceived a longer presentation duration when their attention was directed to a large figure than when directed to a small figure. This finding suggests that even after an event has occurred, the time perception of the event changes depending on the feature receiving one’s attention.
Retrospective attention: The effects on time perception. Attention, Perception, & Psychophysics, 86(6), 1913–1922 (2024). DOI: 10.3758/s13414-024-02918-2