Pub Date: 2026-02-23 | DOI: 10.3758/s13414-025-03214-3
Geoffrey F. Woodman, Sean M. Polyn
Visual memory allows us to behave adaptively in the world in which we live. In this tutorial we will review the types of visual memory storage that have been identified. These storage processes begin the instant a visual stimulus appears and continue through to remembering objects and scenes that were encountered decades ago. The different types of memory storage have different properties of capacity and resolution. We will discuss how our memories allow us to link new information to information that we have acquired across our life spans. We will also discuss how this linking between new information and previously acquired visual information is an active process, in which memories shape how we interpret and store new visual inputs.
Title: Visual memory. Attention Perception & Psychophysics, 88(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12929288/pdf/
Pub Date: 2026-02-23 | DOI: 10.3758/s13414-025-03210-7
Olga Polezhaeva, Stefan Glasauer, Michel-Ange Amorim
Visual motion prediction under uncertainty must rely on both statistical and kinematic properties of the stimulus. Here, we investigated how decision-making processes and psychophysical parameters are modulated during extrapolation of random trajectories with different noise characteristics (Random Walk, RDW, or Independently and Identically Distributed, IID). Noise was applied to the horizontal position of a dot moving downward with constant vertical speed and vanishing before reaching the edge of the screen. Participants had to judge whether the dot would reach the edge right or left of the center. In Experiment 1 we varied the side of the last visible horizontal position, optimal for RDW extrapolation, and the mean of all visible positions, optimal for IID, to be either on the same or on opposite sides of the screen center. Experiment 2 investigated how the final segment of an IID path impacts the trajectory extrapolation when the last visible position and the mean of the last segment are on opposite sides of the center. Experiment 3 focused on assessing the accuracy of trajectory perception amid varying levels of noise. Behavioral and DDM (Diffusion Decision Model) analyses revealed that for RDW trajectories, participants relied on the last visible position, reflecting the temporal continuity of the path and leading to faster and more accurate decision making. IID trajectories showed greater variability in prediction strategies, with participants also focusing more on the last segment, as with RDW, rather than the mean position of the whole previous trajectory. However, this strategy works well even for IID paths despite being a suboptimal solution. These findings suggest that the perceptual system favors smooth motion for visual interpretation, aiding in the prediction of uncertain visual trajectories.
Title: Prediction of uncertain visual trajectories is biased toward motion continuity. Attention Perception & Psychophysics, 88(3).
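The two noise regimes and the two candidate extrapolation cues described above can be sketched in a few lines of Python/NumPy. This is a minimal illustration, not the authors' stimulus code: the frame count, noise level, and function names are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, sigma = 60, 1.0  # illustrative values, not the study's parameters

# Random-walk (RDW) noise: horizontal steps accumulate, so successive
# positions are temporally correlated and the last visible position is
# the best single cue to where the dot will cross the screen edge.
rdw_path = np.cumsum(rng.normal(0.0, sigma, n_frames))

# IID noise: each horizontal position is an independent draw around the
# true trajectory (here, the screen center at 0), so averaging all
# visible positions is the statistically optimal cue.
iid_path = rng.normal(0.0, sigma, n_frames)

def predict_from_last(path):
    """Extrapolate from the last visible horizontal position (RDW-optimal)."""
    return float(path[-1])

def predict_from_mean(path):
    """Extrapolate from the mean of all visible positions (IID-optimal)."""
    return float(np.mean(path))

def side(prediction):
    """The judgment in the task: will the dot cross left or right of center?"""
    return "right" if prediction > 0 else "left"
```

The behavioral finding is that participants lean on `predict_from_last`-style cues even for IID paths, where `predict_from_mean` would be the optimal strategy.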
Pub Date: 2026-02-23 | DOI: 10.3758/s13414-025-03220-5
Drew J. McLaughlin, Jackson S. Colvett, Julie M. Bugg, Kristin J. Van Engen
Alternating between different talkers during listening typically incurs a cognitive processing cost. How these processing costs manifest, and potentially differ, in a multi-accent setting remains to be examined. Across two experiments, we investigate (1) whether talker and accent switching costs are driven by engagement of a recalibration mechanism, and (2) whether global listening context affects the magnitude of talker and accent switching costs. The results of our first experiment indicate that switching between speakers of the same second language (L2) accent (e.g., between two Mandarin-accented speakers of English) was less cognitively challenging than switching between speakers of different L2 accents (e.g., between a Mandarin-accented speaker and a Turkish-accented speaker of English). This outcome suggests that the perceptual distance (i.e., the holistic estimate of spectral and temporal differences in acoustic signals) between two speakers’ productions determines the size of associated switching costs, such that recalibration is less cognitively demanding for speakers with the same L2 accent. In our second experiment, we examine whether a more challenging block-wide listening context results in a global upregulation of cognitive resources, and, subsequently, reduces the cognitive resources required to (a) process L2 accent and (b) resolve local talker and accent changes. Here, the overall cognitive demands of processing L2 accent were reduced, as predicted, but talker and accent switching costs were not. We conclude that talker and accent switching are supported by a recalibration mechanism and that global upregulation of cognitive resources may reduce L2 accent processing costs but not local switching costs.
Title: Sequence effects during speech perception reveal multi-accent processing costs. Attention Perception & Psychophysics, 88(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12929306/pdf/
Traditionally considered a memory structure, the hippocampus may contribute to visual perception in fundamental ways. Recent evidence suggests that the ability to differentiate highly confusable unfamiliar faces could involve pattern separation, a mnemonic process mediated by the hippocampal dentate gyrus. Hippocampal involvement, however, may be influenced by existing face memories. We tested BL, an individual with rare selective bilateral dentate gyrus lesions accompanied by compromised pattern separation, and 34 control participants to investigate these possibilities. Participants were administered morphed images of nonfamous and famous faces in a standard categorical perception (CP) identification and discrimination experiment, with nonfamous faces especially high in perceptual overlap without the influence of prior knowledge. All participants, including BL, exhibited nonlinear identification of famous faces with a midpoint category boundary. Controls identified newly learned nonfamous faces with lower fidelity and a midpoint category boundary, whereas BL showed a shift in category boundary. When discriminating face pairs, controls showed typical CP effects of better between-category than within-category discrimination—but only for famous faces. BL showed extreme within-category “compression” for both nonfamous and famous faces, reflecting his tendency to pattern complete following suboptimal pattern separation. By using standard tests of CP, we show that the dentate gyrus, by virtue and extent of its pattern separation function, contributes to the CP of faces. This study provides an essential missing link in understanding the perceptual processes and interactions with prior knowledge involved in face processing by the dentate gyrus.
Title: The effects of hippocampal dentate gyrus lesions on categorical face perception. Authors: Stevenson Baker, Morris Moscovitch, Ariana Youm, Yarden Levy, R. Shayna Rosenbaum. Pub Date: 2026-02-23 | DOI: 10.3758/s13414-025-03216-1. Attention Perception & Psychophysics, 88(3).
Pub Date: 2026-02-23 | DOI: 10.3758/s13414-025-03206-3
Violet A. Brown, Adina Holloway, Amadou Touré, Salma Ali, Alyssa Alvarez, Tiffany Nyamao, Yuxin Lin, Ostap Hrebeniuk, Julia F. Strand
Listeners typically understand speech more accurately when they can see and hear the talker relative to hearing alone. However, seeing the talker’s face does not necessarily reduce the cognitive costs associated with processing speech as measured by dual-task costs. In difficult listening conditions, dual-task response times may be faster for audiovisual than audio-only speech, but when listening conditions are easy, the presence of a talking face may have no effect on dual-task responses or may even slow responses relative to listening alone. The current study expanded upon this work by including samples of both native and nonnative English speakers and assessing speech intelligibility, subjective listening effort (Experiment 1), and dual-task costs (Experiment 2) for audio-only and audiovisual speech across multiple noise levels. We found that seeing the talker reduces dual-task costs only in difficult listening conditions in which the visual information is necessary to accurately identify the speech. The effects of background noise and speech modality were robust within groups of native as well as nonnative listeners, suggesting that if researchers are interested in studying general phenomena related to speech processing (i.e., rather than specifically studying how language background affects results), these effects would have emerged regardless of whether the sample was limited to native speakers of English. However, the magnitude of some effects differed for native and nonnative listeners.
Title: The dual-task costs of audiovisual benefit: Effects of noise and “native” speaker status. Attention Perception & Psychophysics, 88(3).
Pub Date: 2026-02-23 | DOI: 10.3758/s13414-025-03167-7
Andrea De Cesarei, Serena Mastria, Maurizio Codispoti
Values learned through previous experiences of reward can later modulate attentional capture if associated with a distractor in singleton search tasks (value-driven attentional capture; VDAC). Moreover, it has been shown that re-encountering distractor features can facilitate performance or reduce attentional capture (sequential effects). However, little is known about how sequential effects and attentional capture are jointly modulated by learned distractor value. Here, we examined the role of learned reward in sequential modulation of attentional capture. In two experiments we used a VDAC paradigm, varying the type of reward (monetary vs. sustainability-related). After associating letter colors with a high or low reward, or none at all, in a flanker task (learning phase), in a subsequent singleton task (test phase) we manipulated the effects of distractor value of the present and of the previous trial on attentional capture. In both experiments repetition of the same distractor value from trial N-1 to trial N was associated with faster responses, and reward value did not modulate this facilitation. In addition, attentional capture by rewarded, compared with unrewarded, distractors was observed when the preceding trial was unrewarded. Value-signaling distractors, if re-encountered, reduced attentional capture in the current trial, and this happened even for rewarded distractors of different values (e.g., high value followed by low value, and vice versa). These results suggest that, for different forms of incentives, repetition of previously rewarded distractors and attentional capture by the current reward interact in modulating the processing of learned values.
Title: Distraction driven by reward history: Attentional capture and sequential effects. Attention Perception & Psychophysics, 88(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12929281/pdf/
Pub Date: 2026-02-23 | DOI: 10.3758/s13414-025-03204-5
Yukyu Araragi, Hiroyuki Ito
The letter-row tilt illusion is the perception that a row of repeated letters is tilted even though the row is physically horizontal or vertical. We quantitatively examined the effects of the aspect ratios of letters on the letter-row tilt illusion in horizontal letter-rows with or without a staircase structure of horizontal line segments. In Experiment 1, the illusion occurred reliably in letter-rows both with and without the staircase structure. In Experiment 2, the amount of illusion in letter-rows with the staircase structure increased as the relative and absolute lengths of the horizontal line segments increased. In Experiment 3, the amount of illusion in letter-rows without the staircase structure depended on aspect ratio differently than it did in letter-rows with the staircase structure. The present study suggests that different mechanisms are responsible for the letter-row tilt illusions with and without the staircase structure.
Title: The different effects of aspect ratios of letters on the letter-row tilt illusion in staircase and non-staircase stimuli. Attention Perception & Psychophysics, 88(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12929313/pdf/
Distinct visual processing patterns are among the mechanisms underlying atypical facial emotion recognition in individuals with autism spectrum disorder. However, the role of peripheral visual processing, particularly the functional field of view (FFOV), remains unclear. Therefore, this study aimed to examine the relationships among autistic traits, FFOV size, and facial emotion recognition ability. Seventy-five students completed the Autism-Spectrum Quotient (AQ) and then performed facial emotion recognition and FFOV tasks. In the emotion recognition task, participants viewed one of five facial expressions (anger, disgust, fear, happiness, or sadness) on a monitor and selected the word that best described the expression. The FFOV task followed a similar procedure, except that the target digit was presented in peripheral vision immediately after the facial images disappeared. FFOV size was estimated by fitting psychometric functions to the identification performance of the digits as a function of the target eccentricity. The major findings were: (a) AQ scores did not predict FFOV size, (b) FFOV size was positively correlated with the accuracy of facial emotion recognition, and (c) this correlation became non-significant with lower AQ scores. The findings suggest that peripheral visual processing is associated with facial emotion recognition ability, and that this association varies as a function of autistic traits.
Title: Moderating effect of autistic traits on the relationship between peripheral visual processing and facial emotion recognition. Authors: Yuki Harada, Nana Kamei, Chiharu Tsukiyama, Kento Shiozaki, Junji Ohyama, Makoto Wada. Pub Date: 2026-02-23 | DOI: 10.3758/s13414-026-03222-x. Attention Perception & Psychophysics, 88(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12929263/pdf/
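The FFOV estimation step described above — fitting a psychometric function to digit-identification accuracy as a function of target eccentricity — can be sketched as follows. The logistic form, the guessing rate, and all data values here are illustrative assumptions, not the study's materials.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(ecc, threshold, slope):
    """Decreasing logistic: identification accuracy falls with eccentricity
    from near-ceiling toward a 10% guessing rate (ten digit alternatives)."""
    guess = 0.1
    return guess + (1.0 - guess) / (1.0 + np.exp((ecc - threshold) / slope))

# Hypothetical accuracies at six target eccentricities (deg of visual angle).
ecc = np.array([2.0, 5.0, 8.0, 11.0, 14.0, 17.0])
acc = np.array([0.97, 0.94, 0.85, 0.55, 0.30, 0.15])

# Fit threshold and slope; 'threshold' (the midpoint eccentricity) is one
# common way to summarize FFOV size as a single number.
(threshold, slope), _ = curve_fit(psychometric, ecc, acc, p0=[10.0, 2.0])
```

With these toy data the midpoint falls near 11 degrees; a larger fitted threshold would correspond to a larger FFOV.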
Pub Date: 2026-02-23 | DOI: 10.3758/s13414-025-03172-w
Melissa A. Schoenlein, Mouloukou Sidibe, Karen B. Schloss
When interpreting data visualizations, people have expectations of how colors should map onto quantities. These expectations are constructed from multiple biases, including the dark-is-more bias (darker colors represent larger quantities) and the opaque-is-more bias (regions appearing more opaque represent larger quantities), among others. The extent to which any one bias influences interpretations of data visualizations depends on the degree to which that bias is applicable for a given visualization (applicability principle) and its relative weight in combination with other biases (combination principle). However, basic questions remain concerning the perceptual conditions necessary to activate such biases so they become applicable. For example, in previous studies of the opaque-is-more bias, the test stimuli appeared to vary in opacity because they were created by interpolating between a “base” color and a background color, which was lighter or darker than the base color. As such, opacity variation was confounded with large lightness variation. From prior work, it is unknown whether the opaque-is-more bias can be activated without substantial lightness variation. Here, we varied opacity by varying colormap saturation relative to the background while reducing lightness contrast (holding L* in CIELAB constant). We found that the opaque-is-more bias can indeed be activated without substantial lightness variation. In the process, we also found evidence for a new, “saturated-is-more bias,” leading to expectations that regions greater in saturation map to larger magnitudes. These findings extend knowledge of how people infer meaning from visual features and can translate to inform design of effective information visualizations.
Title: “Understanding the opaque-is-more bias and saturated-is-more bias for colormap data visualizations”
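The two stimulus-construction strategies the abstract contrasts, simulating opacity by interpolating a base color toward the background (which confounds opacity with lightness), versus scaling chroma while holding CIELAB L* fixed (which does not), can be sketched as follows. This is a minimal illustrative sketch; the function names and example values are assumptions, not taken from the paper.

```python
def blend(base, background, alpha):
    """Alpha-blend a base color over a background, per channel.

    alpha = 1 yields the fully opaque base color; alpha = 0 yields the
    background. When the background is much lighter or darker than the
    base, varying alpha also varies lightness (the confound noted in
    prior work).
    """
    return tuple(alpha * b + (1 - alpha) * g for b, g in zip(base, background))


def scale_chroma(L, a, b, s):
    """Scale CIELAB chroma (a*, b*) toward neutral while holding L* fixed.

    s = 1 keeps the original saturation; s = 0 yields a neutral gray of
    the same lightness, so apparent opacity can vary without substantial
    lightness variation.
    """
    return (L, s * a, s * b)
```

For example, `blend((100, 0, 0), (255, 255, 255), 0.3)` drifts strongly toward white (lighter as well as more transparent-looking), whereas `scale_chroma(60.0, 40.0, -20.0, 0.3)` desaturates the color while leaving L* = 60 untouched.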
Pub Date: 2026-02-16; DOI: 10.3758/s13414-026-03231-w
Anthony W. Sali, Emily E. Oor
Title: “Correction to: Serial processing of stimulus identity and shift readiness predictions”