Pub Date: 2025-12-29 | DOI: 10.3758/s13414-025-03212-5
Yanna Ren, Ruizhi Li, Jinglun Yu, Ao Guo, Xiangfu Yang, Hewu Zheng, Chen Huang, Yulin Gao, Weiping Yang
This study investigated the effects of audiovisual N-back task training on working memory and audiovisual integration ability in older adults. Twenty healthy older adults underwent 40 sessions of audiovisual N-back training, while 18 healthy older adults served as controls. Event-related potentials (ERPs) and performance data were collected at baseline and at the end of the training period. The results indicated that working memory in older adults gradually improved with training. In the audiovisual 3-back task, training enhanced the discriminability index (d′) and reduced the latency of the N2 component evoked by target stimuli in older adults, compared with the control group. Furthermore, training significantly enhanced older adults' audiovisual integration at an early stage of processing (180–200 ms). This study demonstrates that audiovisual N-back training effectively improves working memory and early-stage audiovisual integration in older adults. The findings highlight the potential of audiovisual N-back task training as an efficient method for enhancing cognitive and perceptual abilities in older adults and counteracting age-related brain decline.
Title: Audiovisual N-back training in older adults: Benefits to working memory and audiovisual integration
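The discriminability index d′ reported above is standard signal-detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch in Python, assuming raw response counts; the log-linear correction is a common convention, not necessarily the one the authors used:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection d' from raw response counts."""
    # Log-linear correction keeps rates away from 0 and 1,
    # where the inverse normal CDF would be infinite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Positive d': hits clearly exceed false alarms
print(d_prime(45, 5, 5, 45))
```

Higher d′ after training thus reflects better separation of target from nontarget stimuli, independent of response bias.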
Pub Date: 2025-12-29 | DOI: 10.3758/s13414-025-03209-0
XingXuan Fang
This study investigated the perception and processing of speech in noise among musicians and nonmusicians. The study included 60 participants, ages 25–35, divided equally between musicians with formal training and nonmusicians without structured musical experience. Participants' hearing was assessed through pure-tone audiometry to confirm normal hearing function. During the study, all participants listened to audio recordings of articulation tables overlaid with white noise. The Articulation Index (AI), scored on a 0–10 scale on which higher scores indicate better speech perception in noisy environments, measures overall intelligibility. For stop consonants, the AI was 8.0 for musicians and 6.5 for nonmusicians; for fricative consonants, it was 7.5 and 6.0, respectively. The final %ALcons (percentage articulation loss of consonants) for stop consonants was 21% for musicians and 25% for nonmusicians; for fricative consonants, it was 33% and 37%, respectively. These findings indicate that musicians demonstrate better speech perception in noise. While these results suggest potential applications of musical training for auditory rehabilitation (e.g., in cases such as Wernicke's aphasia), further longitudinal research is required to establish causation.
Title: Speech processing in noise and the ability to differentiate sounds by musicians and nonmusicians
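The two intelligibility metrics above point in opposite directions: higher AI is better, while higher %ALcons (articulation loss) is worse. A commonly cited logarithmic approximation maps %ALcons onto the Speech Transmission Index (STI); this relation comes from the room-acoustics literature, not from this study, and is a sketch only:

```python
import math

def alcons_to_sti(alcons_percent):
    """Approximate STI from %ALcons via a commonly cited log relation.

    Reasonable roughly for 1 <= %ALcons <= 100; not taken from this study.
    """
    return 0.9482 - 0.1845 * math.log(alcons_percent)

# Lower articulation loss implies higher estimated intelligibility:
print(alcons_to_sti(21))  # musicians, stop consonants
print(alcons_to_sti(25))  # nonmusicians, stop consonants
```

On this mapping, the musicians' lower %ALcons values translate into consistently higher STI estimates than the nonmusicians'.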
Pub Date: 2025-12-28 | DOI: 10.3758/s13414-025-03162-y
Effie J. Pereira, Jelena Ristic
Attractive faces attract attention. Here, we examined how facial attractiveness influenced covert and overt social attention. Participants discriminated targets occurring after one of 32 different face–object cue pairs, which varied in the degree of attractiveness. Experiment 1 measured covert social attention in manual responses while maintaining central fixation. No evidence of attentional preference for faces was found. Experiment 2 measured overt social attention in eye movements while maintaining natural viewing conditions. A reliable oculomotor preference for attractive faces was found. Thus, facial attractiveness affects covert and overt social attention differently, reflecting the diverging ways in which faces affect attention with respect to social functioning in daily life. The datasets for all experiments can be found on the Open Science Framework (https://osf.io/u54tp/).
Title: Beauty in the eye of the beholder: Attention to attractive faces dissociates across covert and overt measures
Pub Date: 2025-12-28 | DOI: 10.3758/s13414-025-03189-1
Samantha C. Lee, Lars Strother
We used a dual-task paradigm to study the relationship between lateralized face perception and attention by measuring the costs of dividing attention between faces viewed in opposite visual fields. Observers performed judgments of either the sex, orientation, or color of one (single-task, cued) or both (dual-task, uncued) faces in a tachistoscopically viewed pair. We observed dual-task costs (i.e., decreased accuracy for dual- relative to single-task performance) for categorical judgments of face sex (female/male) and orientation (upright/inverted), both of which necessitated visual processing of face-specific information. We did not observe costs for judgments of face color (red-tinted/greyscale), which could be performed without processing face-specific visual information per se. We also observed an unexpected “feature contrast” effect of category-incongruency for judgments of face sex, such that observers showed no dual-task cost when faces in a pair belonged to opposite categories (e.g., female/male) as compared with the same category (e.g., female/female). Finally, dual-task costs for judgments of face sex and orientation (but not color) showed a left visual field (LVF) advantage: dual-task costs were greater in the right visual field (RVF) than in the LVF. We interpret this LVF cost advantage for judgments of face sex and orientation as indicative of the type of visual processing needed to perform face-based judgments. Our results show, for the first time, that the LVF advantage in face perception is directly related to capacity limits induced by divided attention to faces.
Title: Lateralized costs of divided attention to faces
Pub Date: 2025-12-25 | DOI: 10.3758/s13414-025-03201-8
Shengyue Xiong, Zhe-chen Guo, Casey L. Roark, Gangyi Feng, Bharath Chandrasekaran
Talker identification categorizes variable speech signals into stable talker representations, a process facilitated by language and accent familiarity. The dual learning systems (DLS) model posits that speech category learning involves a "reflective" system based on explicit rules and a "reflexive" system based on stimulus–reward associations, with reflexive learning dominating in later stages. In this study, we leverage the DLS framework to investigate talker learning by training Mandarin-speaking listeners to identify talkers in a native-language context (Mandarin) and in nonnative-language contexts with a native accent (English) or a nonnative but familiar accent (Mandarin-accented English). Listeners received either full (e.g., Incorrect. It's Talker 1) or minimally informative (e.g., Incorrect) feedback, encouraging reflective or reflexive learning, respectively. We assessed identification performance through accuracy and response times and analyzed the underlying decision processes using drift diffusion models. Results showed that language and accent familiarity improved accuracy and response times. At later training stages, minimal feedback, which promotes reflexive learning according to the DLS model, facilitated faster identification and more efficient decision-making, particularly in the nonnative language context (English). The findings highlight the benefit of reflexive learning in talker identification through improved response efficiency and the need to consider decision dynamics in this process. The data, materials, and analysis code are available online (https://osf.io/g7r9q/).
Title: Dual learning systems in talker identification: The effects of language, accent, and feedback
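Drift diffusion models like those used above decompose accuracy and response time into an evidence-accumulation process: noisy evidence drifts toward one of two decision boundaries. A toy single-trial Euler–Maruyama simulation, with illustrative parameter values that are not fitted to the study's data:

```python
import random

def ddm_trial(drift, boundary, noise=1.0, dt=0.001, max_t=5.0):
    """Simulate one drift-diffusion trial.

    Returns (choice, rt): choice is +1 if the upper boundary is
    reached, -1 otherwise; rt is the decision time in seconds.
    """
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary and t < max_t:
        # Euler-Maruyama step: deterministic drift plus Gaussian noise
        evidence += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return (1 if evidence >= boundary else -1), t

random.seed(0)
choices = [ddm_trial(drift=2.0, boundary=1.0)[0] for _ in range(200)]
print(sum(c == 1 for c in choices) / 200)  # mostly upper-boundary responses
```

In this framework, "more efficient decision-making" with minimal feedback would show up as a higher drift rate or lower boundary, shortening simulated response times without a matching drop in accuracy.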
Pub Date: 2025-12-18 | DOI: 10.3758/s13414-025-03205-4
Conne George, Michael S. Pratte
A fundamental question in visual working memory research is whether memories are composed of collections of features or are feature-bound, object-based representations. The role of location has been particularly important, as many theories posit that location is a necessary feature to which others are bound. We employ novel variants of change-detection tasks to test the possibility that the features of an object can be stored in visual working memory even without a corresponding memory for their studied locations. In Experiment 1, memory capacity for the colors of studied items was higher than for the conjunction of colors and their locations, implying that some colors were stored without accurate location information. In Experiment 2, this result replicated under conditions of articulatory suppression, suggesting that the presence of colors without accurate locations was not due to verbal encoding. The results of Experiment 3 suggest that for some remembered items, location memory is merely imprecise, but that it can also be completely absent despite accurate memory for the color of that item. These results suggest that location is not a necessary feature of working memory storage; rather, features such as color can be successfully stored without a memory for their location.
Title: Features without their locations in visual working memory: Evidence from change-detection tasks
Pub Date: 2025-12-11 | DOI: 10.3758/s13414-025-03185-5
Andrea Dissegna, Luca Betteto, Matteo De Tommaso, Massimo Turatto
Irrelevant peripheral visual onsets have consistently been shown to interfere with target processing, a phenomenon attributed to their ability to divert attention from the target. Here we show that in addition to their detrimental effect on performance, irrelevant visual onsets may also facilitate target discrimination. However, this beneficial effect only emerges once habituation mechanisms have fully abolished onset capture. At a 20% onset rate, onsets produced only interference, with capture habituating across blocks of trials. At 50% and 80% rates, stronger habituation was observed, and once capture was eliminated, onsets began to facilitate performance, as evidenced by faster response times when onsets were present than when they were absent. A further experiment demonstrated that visual onsets facilitate performance by allowing temporal expectations about the moment of target appearance, rather than through a generic alerting effect. These findings demonstrate that irrelevant visual onsets trigger two independent processes in the nervous system, resulting in two opposite effects on performance: interference due to attentional capture and facilitation due to temporal expectation. Our results highlight the flexibility of the attentional system in utilizing the same stimulus representation for different purposes: exogenous orienting with subsequent habituation, and temporal orienting, both of which capitalize on stimulus regularities to optimize processing efficiency.
Title: The dual impact of irrelevant visual onsets: Habituation of capture unlocks onset facilitation
Pub Date: 2025-12-11 | DOI: 10.3758/s13414-025-03186-4
Connor Wessel, Cindy Zhang, Michael Schutz
Although duration perception is well researched in the auditory literature, many experiments ostensibly exploring generalized processing use one type of tone: simplistic "beeps" with abrupt offsets. This leaves unaddressed the question of how we perceive duration when listening to the types of temporally complex sounds common in everyday listening. Here, we investigate the point of equivalence for the duration of steady-state (aka "flat") and more natural decaying (aka "percussive") tones. Through this, we (1) gain further insight into amplitude envelope's role in duration perception and (2) provide guidance useful for future studies moving beyond simplistic tones with flat amplitude envelopes. Specifically, we conduct a series of two-alternative forced-choice adaptive staircase procedures across three experiments, with participants deciding which of two tones sounds longer. Experiment 1 uses sounds matched in amplitude envelope (homogeneous, N = 54), and Experiment 2 uses mismatched sounds (heterogeneous, N = 55). In Experiment 3, participants completed both homogeneous and heterogeneous conditions across 10 sessions (N = 5). The heterogeneous data illustrate that a two-parameter linear equation (y = 110 + 1.31x) best describes the point of subjective equality between flat and percussive tones, with model comparisons suggesting most unexplained variance can be attributed to individual differences. Together, these findings provide a useful step towards clarifying the perception of tones with amplitude envelopes more complex than those traditionally used in auditory perception studies, which holds important implications both for our theoretical understanding of perceived timing and for ongoing applied work on improving hospital medical device sounds (which often use flat tones).
Title: Amplitude envelope and subjective duration: Quantifying the role of decaying offsets in timing perception
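The reported two-parameter model (y = 110 + 1.31x) can be read as a predicted point of subjective equality. A minimal sketch; note that treating x as the percussive-tone duration and y as the matching flat-tone duration (both in ms) is our reading of the abstract, not a mapping it states explicitly:

```python
def flat_equivalent_ms(percussive_ms):
    """Flat-tone duration judged equal in length to a percussive tone.

    Two-parameter linear model from the abstract, y = 110 + 1.31x.
    The assignment of x to percussive duration is an assumption.
    """
    return 110 + 1.31 * percussive_ms

print(flat_equivalent_ms(300))  # roughly 503 ms
```

A slope above 1 plus a positive intercept means the mismatch between flat and percussive tones grows with duration rather than staying constant, consistent with the abstract's emphasis on envelope effects.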
Pub Date: 2025-12-08 | DOI: 10.3758/s13414-025-03183-7
Jiaxuan Teng, Eve A. Isham
Understanding time is crucial for our survival, influencing tasks that require coordination, alignment, and cognitive assessment. However, how temporal errors are learned from and monitored remains unclear. A subset of studies has shown that, unlike other magnitude modalities, perceptual learning in the temporal domain may not benefit from error feedback, suggesting that temporal perceptual learning may involve a process distinct from that for non-temporal information. We hypothesize that this may be because the concept of time is deeply and internally rooted within each organism, and so may benefit more from an internal evaluation process than from external feedback. To further investigate how we learn to time, the current study examines learning rate, specificity, and transferability as a function of feedback method (explicit feedback vs. self-reflected metacognitive evaluation) during a temporal production task. The examination is conducted in conjunction with a line production task to determine whether the results diverge across temporal and spatial domains. Our results showed that spatial performance improved across all feedback conditions. However, improvements in temporal accuracy were slower and less pronounced regardless of feedback type. Further analysis revealed that participants were aware of the direction and magnitude of their errors even when accuracy did not improve, highlighting a potential role for metacognitive insight that supports error monitoring and may aid learning transfer. These findings are discussed with respect to the cognitive mechanisms underlying temporal learning.
{"title":"Determining the potential benefits of error feedback and metacognition on perceptual learning in the temporal and spatial domain","authors":"Jiaxuan Teng, Eve A. Isham","doi":"10.3758/s13414-025-03183-7","DOIUrl":"10.3758/s13414-025-03183-7","url":null,"abstract":"<div><p>Understanding time is crucial for our survival, influencing tasks that require coordination, alignment, and cognitive assessments. However, the process of learning and monitoring temporal errors remains unclear. A subset of studies has shown that, unlike other magnitude modalities, perceptual learning in the temporal domain may not benefit from error feedback, suggesting that temporal perceptual learning may involve a process distinct from that used for non-temporal information. We hypothesize that, because the concept of time is deeply and internally rooted within each organism, temporal learning may benefit more from an internal evaluation process than from external feedback. To further investigate how we learn to time, the current study examines learning rate, specificity, and transferability as a function of feedback method (explicit feedback vs. self-reflected metacognitive evaluation) during a temporal production task. The examination is also conducted in conjunction with a line production task to determine whether the results diverge for the temporal and spatial domains. Our results showed that spatial performance improved across all feedback conditions. However, improvements in temporal accuracy were slower and less pronounced regardless of feedback type. Further analysis revealed that participants were aware of the direction and magnitude of their errors, even when accuracy did not improve, highlighting a potential role for metacognitive insight that supports error monitoring and may aid learning transfer. 
These findings are discussed with respect to the cognitive mechanisms underlying temporal learning.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145702868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-07DOI: 10.3758/s13414-025-03169-5
Ben Sclodnick, Hong-Jin Sun, Bruce Milliken
This study examined the influence of top-down preparation on singleton search performance. The method involved presenting a single item that was unpredictably blue or orange, followed by a singleton search display that unpredictably contained a blue target with orange distractors or vice versa. Preparation was instantiated by instructing participants to respond to the single item only if it was a particular colour (e.g., “respond only to blue single items”). The subsequent colour-singleton search target was either blue or orange. In a prior study with this method, participants prepared for the same single-item colour on all trials, and search performance was more than 200 ms faster when the prepared-for colour matched the colour-singleton target than when it mismatched (Sclodnick et al., Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale, 78, 129–135, 2024). In the present study, Experiments 1, 2a/2b, and 3a/3b demonstrate that a similar but smaller effect occurs when preparation for a particular single-item colour is cued randomly from trial to trial. Experiments 2a/2b demonstrate that this preparatory effect is sensitive to the temporal interval between the single-item and search tasks, but only when preparation is cued on a trial-to-trial basis. Experiments 3a/3b demonstrate that this preparatory effect is reduced with increases in display size, but remains robust with display sizes up to nine items. Together, the results demonstrate that memory representations resulting from both a single instance and multiple similar instances of top-down preparatory control can carry over to influence subsequent singleton search performance.
{"title":"Top-down preparation contributes to intertrial priming in singleton search","authors":"Ben Sclodnick, Hong-Jin Sun, Bruce Milliken","doi":"10.3758/s13414-025-03169-5","DOIUrl":"10.3758/s13414-025-03169-5","url":null,"abstract":"<div><p>This study examined the influence of top-down preparation on singleton search performance. The method involved presenting a single item that was unpredictably blue or orange, followed by a singleton search display that unpredictably contained a blue target with orange distractors or vice versa. Preparation was instantiated by instructing participants to respond to the single item only if it was a particular colour (e.g., “respond only to blue single items”). The subsequent colour-singleton search target was either blue or orange. In a prior study with this method, participants prepared for the same single-item colour on all trials, and search performance was more than 200 ms faster when the prepared-for colour matched the colour-singleton target than when it mismatched (Sclodnick et al., <i>Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale</i>, <i>78</i>, 129–135, 2024). In the present study, Experiments 1, 2a/2b, and 3a/3b demonstrate that a similar but smaller effect occurs when preparation for a particular single-item colour is cued randomly from trial to trial. Experiments 2a/2b demonstrate that this preparatory effect is sensitive to the temporal interval between the single-item and search tasks, but only when preparation is cued on a trial-to-trial basis. Experiments 3a/3b demonstrate that this preparatory effect is reduced with increases in display size, but remains robust with display sizes up to nine items. 
Together, the results demonstrate that memory representations resulting from both a single instance and multiple similar instances of top-down preparatory control can carry over to influence subsequent singleton search performance.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145702834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}