Taking time: Auditory statistical learning benefits from distributed exposure.
Pub Date: 2025-01-17 | DOI: 10.3758/s13423-024-02634-w
Jasper de Waard, Jan Theeuwes, Louisa Bogaerts
In an auditory statistical learning paradigm, listeners learn to partition a continuous stream of syllables by discovering the repeating syllable patterns that constitute the speech stream. Here, we ask whether auditory statistical learning benefits from spaced exposure compared with massed exposure. In a longitudinal online study on Prolific, we exposed 100 participants to the regularities in a spaced way (i.e., with exposure blocks spread out over 3 days) and another 100 in a massed way (i.e., with all exposure blocks lumped together on a single day). In the exposure phase, participants listened to streams composed of pairs while responding to a target syllable. The spaced and massed groups exhibited equal learning during exposure, as indicated by a comparable response-time advantage for predictable target syllables. However, in terms of resulting long-term knowledge, we observed a benefit from spaced exposure. Following a 2-week delay period, we tested participants' knowledge of the pairs in a forced-choice test. While both groups performed above chance, the spaced group had higher accuracy. Our findings speak to the importance of the timing of exposure to structured input, also for statistical learning outside the laboratory (e.g., in language development), and imply that current investigations of auditory statistical learning likely underestimate human statistical learning abilities.
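For readers unfamiliar with the paradigm, the sketch below shows how such a pair-structured stream can be generated. The eight syllable pairs and the no-immediate-repeat rule are illustrative assumptions, not the study's materials.

```python
import random

# Hypothetical inventory of eight syllable pairs. Within a pair the second
# syllable always follows the first (transitional probability = 1.0);
# across pairs, order is random, so between-pair transitions are weak.
PAIRS = [("bu", "ki"), ("do", "ra"), ("ti", "mo"), ("ga", "lu"),
         ("pe", "na"), ("so", "fi"), ("ku", "ve"), ("za", "ro")]

def make_stream(n_pairs, seed=0):
    """Concatenate randomly ordered pairs into one continuous stream,
    avoiding immediate repetition of the same pair."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_pairs):
        pair = rng.choice([p for p in PAIRS if p != prev])
        stream.extend(pair)
        prev = pair
    return stream

# A target that is the second member of its pair is predictable from the
# preceding syllable, so detection RTs to it should drop as learning proceeds.
print(" ".join(make_stream(200)[:12]))
```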
{"title":"Taking time: Auditory statistical learning benefits from distributed exposure.","authors":"Jasper de Waard, Jan Theeuwes, Louisa Bogaerts","doi":"10.3758/s13423-024-02634-w","DOIUrl":"https://doi.org/10.3758/s13423-024-02634-w","url":null,"abstract":"<p><p>In an auditory statistical learning paradigm, listeners learn to partition a continuous stream of syllables by discovering the repeating syllable patterns that constitute the speech stream. Here, we ask whether auditory statistical learning benefits from spaced exposure compared with massed exposure. In a longitudinal online study on Prolific, we exposed 100 participants to the regularities in a spaced way (i.e., with exposure blocks spread out over 3 days) and another 100 in a massed way (i.e., with all exposure blocks lumped together on a single day). In the exposure phase, participants listened to streams composed of pairs while responding to a target syllable. The spaced and massed groups exhibited equal learning during exposure, as indicated by a comparable response-time advantage for predictable target syllables. However, in terms of resulting long-term knowledge, we observed a benefit from spaced exposure. Following a 2-week delay period, we tested participants' knowledge of the pairs in a forced-choice test. While both groups performed above chance, the spaced group had higher accuracy. Our findings speak to the importance of the timing of exposure to structured input and also for statistical learning outside of the laboratory (e.g., in language development), and imply that current investigations of auditory statistical learning likely underestimate human statistical learning abilities.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143010497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The impact of relative word-length on effects of non-adjacent word transpositions.
Pub Date: 2025-01-17 | DOI: 10.3758/s13423-024-02637-7
Yun Wen, Jonathan Grainger
A recent study (Wen et al., Journal of Experimental Psychology: Human Perception and Performance, 50: 934-941, 2024) found no influence of relative word-length on transposed-word effects. However, following the tradition of prior research on effects of transposed words during sentence reading, the transposed words in that study were adjacent words (words at positions 2 and 3 or 3 and 4 in five-word sequences). We surmised that the absence of an influence of relative word-length might be due to word identification being too precise when the two words are located close to the eye-fixation location, hence cancelling the impact of more approximate indices of word identity such as word length. We therefore hypothesized that relative word-length might impact transposed-word effects when the transposition involves non-adjacent words. The present study put this hypothesis to the test and found that relative word-length does modify the size of transposed-word effects with non-adjacent transpositions. Transposed-word effects are greater when the transposed words have the same length. Furthermore, a cross-study analysis confirmed that transposed-word effects are greater for adjacent than for non-adjacent transpositions.
Age-related differences in information, but not task control in the color-word Stroop task.
Pub Date: 2025-01-17 | DOI: 10.3758/s13423-024-02631-z
Eldad Keha, Daniela Aisenberg-Shafran, Shachar Hochman, Eyal Kalanthroff
Older adults have been found to struggle with tasks that require cognitive control. One task that measures the ability to exert cognitive control is the color-word Stroop task. Almost all studies that tested cognitive control in older adults using the Stroop task have focused on one type of control: information control. In the present work, we ask whether older adults also show a deficit in another type of cognitive control: task control. To that end, we tested older and younger adults by isolating and measuring two types of conflict: information conflict and task conflict. Information conflict was measured by the difference between color identification of incongruent color words and color identification of neutral words, while task conflict was measured by the difference between color identification of neutral words and color identification of neutral symbols, and by the reverse facilitation effect. We tested how the behavioral markers of these two types of conflict are affected under low task control conditions, which is essential for measuring task conflict behaviorally. Older adults demonstrated a deficit in information control, showing a larger information conflict marker, but no deficit in task control, as no differences in task conflict markers were found between younger and older adults. These findings support previous studies arguing against theories that link the larger Stroop interference in older adults to a generic slowdown or a generic inhibitory failure. We discuss the relevance of the results and future research directions in line with other Stroop studies that tested age-related differences in different control mechanisms.
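The two conflict markers are simple response-time difference scores; the sketch below computes them from hypothetical condition means (all values invented for illustration).

```python
# Hypothetical condition mean RTs (ms), for illustration only.
rt = {
    "incongruent_word": 720,  # "RED" shown in blue ink
    "neutral_word":     660,  # "DEAL" shown in blue ink
    "neutral_symbol":   640,  # "%%%%" shown in blue ink
    "congruent_word":   648,  # "BLUE" shown in blue ink (low task control)
}

# Information conflict: incongruent vs. neutral *words* -- both engage
# reading, so the difference isolates the conflicting colour information.
information_conflict = rt["incongruent_word"] - rt["neutral_word"]  # 60 ms

# Task conflict: neutral words vs. non-lexical symbols -- the word triggers
# an irrelevant reading task even though its meaning is colour-neutral.
task_conflict = rt["neutral_word"] - rt["neutral_symbol"]           # 20 ms

# Facilitation: symbols vs. congruent words. A *negative* value (congruent
# slower than symbols) is the "reverse facilitation" marker of task conflict.
facilitation = rt["neutral_symbol"] - rt["congruent_word"]          # -8 ms

print(information_conflict, task_conflict, facilitation)
```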
{"title":"Age-related differences in information, but not task control in the color-word Stroop task.","authors":"Eldad Keha, Daniela Aisenberg-Shafran, Shachar Hochman, Eyal Kalanthroff","doi":"10.3758/s13423-024-02631-z","DOIUrl":"https://doi.org/10.3758/s13423-024-02631-z","url":null,"abstract":"<p><p>Older adults were found to struggle with tasks that require cognitive control. One task that measures the ability to exert cognitive control is the color-word Stroop task. Almost all studies that tested cognitive control in older adults using the Stroop task have focused on one type of control - Information control. In the present work, we ask whether older adults also show a deficit in another type of cognitive control - Task control. To that end, we tested older and younger adults by isolating and measuring two types of conflict - information conflict and task conflict. Information conflict was measured by the difference between color identification of incongruent color words and color identification of neutral words, while task conflict was measured by the difference between color identification of neutral words and color identification of neutral symbols and by the reverse facilitation effect. We tested how the behavioral markers of these two types of conflicts are affected under low task control conditions, which is essential for measuring task conflict behaviorally. Older adults demonstrated a deficit in information control by showing a larger information conflict marker, but not in task control markers, as no differences in task conflict were found between younger and older adults. These findings supported previous studies that work against theories that link the larger Stroop interference in older adults to a generic slowdown or a generic inhibitory failure. We discussed the relevancy of the results and future research directions in line with other Stroop studies that tested age-related differences in different control mechanisms.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143010493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distinct detection and discrimination sensitivities in visual processing of real versus unreal optic flow.
Pub Date: 2025-01-14 | DOI: 10.3758/s13423-024-02616-y
Li Li, Xuechun Shen, Shuguang Kuai
We examined the intricate mechanisms underlying visual processing of complex motion stimuli by measuring the detection sensitivity to contraction and expansion patterns and the discrimination sensitivity to the location of the center of motion (CoM) in various real and unreal optic flow stimuli. We conducted two experiments (N = 20 each) and compared responses to both "real" optic flow stimuli containing information about self-movement in a three-dimensional scene and "unreal" optic flow stimuli lacking such information. We found that detection sensitivity to contraction surpassed that to expansion patterns for unreal optic flow stimuli, whereas this trend was reversed for real optic flow stimuli. Furthermore, while discrimination sensitivity to the CoM location was not affected by stimulus duration for unreal optic flow stimuli, it showed a significant improvement when stimulus duration increased from 100 to 400 ms for real optic flow stimuli. These findings provide compelling evidence that the visual system employs distinct processing approaches for real versus unreal optic flow even when they are perfectly matched for two-dimensional global features and local motion signals. These differences reveal influences of self-movement in natural environments, enabling the visual system to uniquely process stimuli with significant survival implications.
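The abstract does not specify how sensitivity was computed; a common choice in detection paradigms is signal-detection d', sketched here with made-up hit and false-alarm counts.

```python
# Sensitivity (d') from hit and false-alarm rates; all counts are invented.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a small correction
    to keep rates away from 0 and 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# e.g., contraction detection: 88 hits / 12 misses, 20 FAs / 80 CRs
print(round(d_prime(88, 12, 20, 80), 2))   # ~2.0
```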
{"title":"Distinct detection and discrimination sensitivities in visual processing of real versus unreal optic flow.","authors":"Li Li, Xuechun Shen, Shuguang Kuai","doi":"10.3758/s13423-024-02616-y","DOIUrl":"https://doi.org/10.3758/s13423-024-02616-y","url":null,"abstract":"<p><p>We examined the intricate mechanisms underlying visual processing of complex motion stimuli by measuring the detection sensitivity to contraction and expansion patterns and the discrimination sensitivity to the location of the center of motion (CoM) in various real and unreal optic flow stimuli. We conducted two experiments (N = 20 each) and compared responses to both \"real\" optic flow stimuli containing information about self-movement in a three-dimensional scene and \"unreal\" optic flow stimuli lacking such information. We found that detection sensitivity to contraction surpassed that to expansion patterns for unreal optic flow stimuli, whereas this trend was reversed for real optic flow stimuli. Furthermore, while discrimination sensitivity to the CoM location was not affected by stimulus duration for unreal optic flow stimuli, it showed a significant improvement when stimulus duration increased from 100 to 400 ms for real optic flow stimuli. These findings provide compelling evidence that the visual system employs distinct processing approaches for real versus unreal optic flow even when they are perfectly matched for two-dimensional global features and local motion signals. These differences reveal influences of self-movement in natural environments, enabling the visual system to uniquely process stimuli with significant survival implications.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142984613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The cost of perspective switching: Constraints on simultaneous activation.
Pub Date: 2025-01-13 | DOI: 10.3758/s13423-024-02633-x
Dorit Segal
Visual perspective taking often involves transitioning between perspectives, yet the cognitive mechanisms underlying this process remain unclear. The current study draws on insights from task- and language-switching research to address this gap. In Experiment 1, 79 participants judged the perspective of an avatar positioned in various locations, observing either the rectangular or the square side of a rectangular cuboid hanging from the ceiling. The avatar's perspective was either consistent or inconsistent with the participant's, and its computation sometimes required mental transformation. The task included both single-position blocks, in which the avatar's location remained fixed across all trials, and mixed-position blocks, in which the avatar's position changed across trials. Performance was compared across trial types and positions. In Experiment 2, 126 participants completed a similar task administered online, with more trials, and performance was compared at various points within the response time distribution (vincentile analysis). Results revealed a robust switching cost. However, mixing costs, which reflect the ability to maintain multiple task sets active in working memory, were absent, even in slower response times. Additionally, responses to the avatar's position varied as a function of consistency with the participants' viewpoint and the angular disparity between them. These findings suggest that perspective switching is costly, people cannot activate multiple perspectives simultaneously, and the computation of other people's visual perspectives varies with cognitive demands.
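Switching and mixing costs are standard difference scores from the task-switching literature; the sketch below computes both from hypothetical mean RTs.

```python
# Illustrative mean RTs (ms); all values are hypothetical.
rt_single       = 640  # single-position blocks (avatar location fixed)
rt_repeat_mixed = 655  # mixed blocks, avatar position repeats
rt_switch_mixed = 730  # mixed blocks, avatar position changes

# Switching cost: transient cost of changing perspective between trials.
switching_cost = rt_switch_mixed - rt_repeat_mixed   # 75 ms

# Mixing cost: sustained cost of keeping several perspectives available,
# computed on repeat trials only; its absence suggests that multiple
# perspectives are not held active simultaneously.
mixing_cost = rt_repeat_mixed - rt_single            # 15 ms

print(switching_cost, mixing_cost)
```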
{"title":"The cost of perspective switching: Constraints on simultaneous activation.","authors":"Dorit Segal","doi":"10.3758/s13423-024-02633-x","DOIUrl":"https://doi.org/10.3758/s13423-024-02633-x","url":null,"abstract":"<p><p>Visual perspective taking often involves transitioning between perspectives, yet the cognitive mechanisms underlying this process remain unclear. The current study draws on insights from task- and language-switching research to address this gap. In Experiment 1, 79 participants judged the perspective of an avatar positioned in various locations, observing either the rectangular or the square side of a rectangular cube hanging from the ceiling. The avatar's perspective was either consistent or inconsistent with the participant's, and its computation sometimes required mental transformation. The task included both single-position blocks, in which the avatar's location remained fixed across all trials, and mixed-position blocks, in which the avatar's position changed across trials. Performance was compared across trial types and positions. In Experiment 2, 126 participants completed a similar task administered online, with more trials, and performance was compared at various points within the response time distribution (vincentile analysis). Results revealed a robust switching cost. However, mixing costs, which reflect the ability to maintain multiple task sets active in working memory, were absent, even in slower response times. Additionally, responses to the avatar's position varied as a function of consistency with the participants' viewpoint and the angular disparity between them. These findings suggest that perspective switching is costly, people cannot activate multiple perspectives simultaneously, and the computation of other people's visual perspectives varies with cognitive demands.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142979863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Do we feel colours? A systematic review of 128 years of psychological research linking colours and emotions.
Pub Date: 2025-01-13 | DOI: 10.3758/s13423-024-02615-z
Domicele Jonauskaite, Christine Mohr
Colour is an integral part of natural and constructed environments. For many, it also has an aesthetic appeal, with some colours being more pleasant than others. Moreover, humans seem to systematically and reliably associate colours with emotions, such as yellow with joy, black with sadness, light colours with positive and dark colours with negative emotions. To systematise such colour-emotion correspondences, we identified 132 relevant peer-reviewed articles published in English between 1895 and 2022. These articles covered a total of 42,266 participants from 64 different countries. We found that all basic colour categories had systematic correspondences with affective dimensions (valence, arousal, power) as well as with discrete affective terms (e.g., love, happy, sad, bored). Most correspondences were many-to-many, with systematic effects driven by lightness, saturation, and hue ('colour temperature'). More specifically, (i) LIGHT and DARK colours were associated with positive and negative emotions, respectively; (ii) RED with empowering, high arousal positive and negative emotions; (iii) YELLOW and ORANGE with positive, high arousal emotions; (iv) BLUE, GREEN, GREEN-BLUE, and WHITE with positive, low arousal emotions; (v) PINK with positive emotions; (vi) PURPLE with empowering emotions; (vii) GREY with negative, low arousal emotions; and (viii) BLACK with negative, high arousal emotions. Shared communication needs might explain these consistencies across studies, making colour an excellent medium for communication of emotion. As most colour-emotion correspondences were tested on an abstract level (i.e., associations), it remains to be seen whether such correspondences translate to the impact of colour on experienced emotions and specific contexts.
{"title":"Do we feel colours? A systematic review of 128 years of psychological research linking colours and emotions.","authors":"Domicele Jonauskaite, Christine Mohr","doi":"10.3758/s13423-024-02615-z","DOIUrl":"https://doi.org/10.3758/s13423-024-02615-z","url":null,"abstract":"<p><p>Colour is an integral part of natural and constructed environments. For many, it also has an aesthetic appeal, with some colours being more pleasant than others. Moreover, humans seem to systematically and reliably associate colours with emotions, such as yellow with joy, black with sadness, light colours with positive and dark colours with negative emotions. To systematise such colour-emotion correspondences, we identified 132 relevant peer-reviewed articles published in English between 1895 and 2022. These articles covered a total of 42,266 participants from 64 different countries. We found that all basic colour categories had systematic correspondences with affective dimensions (valence, arousal, power) as well as with discrete affective terms (e.g., love, happy, sad, bored). Most correspondences were many-to-many, with systematic effects driven by lightness, saturation, and hue ('colour temperature'). More specifically, (i) LIGHT and DARK colours were associated with positive and negative emotions, respectively; (ii) RED with empowering, high arousal positive and negative emotions; (iii) YELLOW and ORANGE with positive, high arousal emotions; (iv) BLUE, GREEN, GREEN-BLUE, and WHITE with positive, low arousal emotions; (v) PINK with positive emotions; (vi) PURPLE with empowering emotions; (vii) GREY with negative, low arousal emotions; and (viii) BLACK with negative, high arousal emotions. Shared communication needs might explain these consistencies across studies, making colour an excellent medium for communication of emotion. As most colour-emotion correspondences were tested on an abstract level (i.e., associations), it remains to be seen whether such correspondences translate to the impact of colour on experienced emotions and specific contexts.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142979847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Increased attention towards progress information near a goal state.
Pub Date: 2025-01-13 | DOI: 10.3758/s13423-024-02636-8
Sean Devine, Y Doug Dong, Martin Sellier Silva, Mathieu Roy, A Ross Otto
A growing body of evidence across psychology suggests that (cognitive) effort exertion increases in proximity to a goal state. For instance, previous work has shown that participants respond more quickly, but not less accurately, when they near a goal, as indicated by a filling progress bar. Yet it remains unclear when, over the course of a cognitively demanding task, people monitor progress information: Do they continuously monitor their goal progress over the course of a task, or attend more frequently to it as they near their goal? To answer this question, we used eye-tracking to examine trial-by-trial changes in progress monitoring as participants completed blocks of an attentionally demanding oddball task. Replicating past work, we found that participants increased cognitive effort exertion near a goal, as evinced by an increase in correct responses per second. More interestingly, we found that the rate at which participants attended to goal progress information, operationalized here as the frequency of gazes towards a progress bar, increased steeply near a goal state. In other words, participants extracted information from the progress bar at a higher rate when goals were proximal (versus distal). In an exploratory analysis of tonic pupil diameter, we also found that tonic pupil size increased sharply as participants approached a goal state, mirroring the pattern of gaze. These results support the view that people attend to progress information more as they approach a goal.
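A minimal sketch of the two markers, correct responses per second (effort) and gazes per second toward the progress-bar area of interest (monitoring), using invented per-bin tallies.

```python
# Invented tallies per goal-proximity bin; numbers are hypothetical.
bins = {
    "far from goal": {"correct": 28, "gazes_to_bar": 4,  "seconds": 20.0},
    "mid-block":     {"correct": 30, "gazes_to_bar": 6,  "seconds": 20.0},
    "near goal":     {"correct": 35, "gazes_to_bar": 14, "seconds": 20.0},
}

for label, b in bins.items():
    crps = b["correct"] / b["seconds"]             # effort marker
    gaze_rate = b["gazes_to_bar"] / b["seconds"]   # monitoring marker
    print(f"{label}: {crps:.2f} correct/s, {gaze_rate:.2f} bar-gazes/s")
```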
{"title":"Increased attention towards progress information near a goal state.","authors":"Sean Devine, Y Doug Dong, Martin Sellier Silva, Mathieu Roy, A Ross Otto","doi":"10.3758/s13423-024-02636-8","DOIUrl":"https://doi.org/10.3758/s13423-024-02636-8","url":null,"abstract":"<p><p>A growing body of evidence across psychology suggests that (cognitive) effort exertion increases in proximity to a goal state. For instance, previous work has shown that participants respond more quickly, but not less accurately, when they near a goal-as indicated by a filling progress bar. Yet it remains unclear when over the course of a cognitively demanding task do people monitor progress information: Do they continuously monitor their goal progress over the course of a task, or attend more frequently to it as they near their goal? To answer this question, we used eye-tracking to examine trial-by-trial changes in progress monitoring as participants completed blocks of an attentionally demanding oddball task. Replicating past work, we found that participants increased cognitive effort exertion near a goal, as evinced by an increase in correct responses per second. More interestingly, we found that the rate at which participants attended to goal progress information-operationalized here as the frequency of gazes towards a progress bar-increased steeply near a goal state. In other words, participants extracted information from the progress bar at a higher rate when goals were proximal (versus distal). In exploratory analysis of tonic pupil diameter, we also found that tonic pupil size increased sharply as participants approached a goal state, mirroring the pattern of gaze. These results support the view that people attend to progress information more as they approach a goal.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142979855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Product, not process: Metacognitive monitoring of visual performance during sustained attention.
Pub Date: 2025-01-09 | DOI: 10.3758/s13423-024-02635-9
Cheongil Kim, Sang Chul Chong
The performance of the human visual system exhibits moment-to-moment fluctuations influenced by multiple neurocognitive factors. To deal with this instability of the visual system, introspective awareness of current visual performance (metacognitive monitoring) may be crucial. In this study, we investigate whether and how people can monitor their own visual performance during sustained attention by adopting confidence judgments as indicators of metacognitive monitoring - assuming that if participants can monitor visual performance, confidence judgments will accurately track performance fluctuations. In two experiments (N = 40), we found that participants were able to monitor fluctuations in visual performance during sustained attention. Importantly, metacognitive monitoring largely relied on the quality of target perception, a product of visual processing ("I lack confidence in my performance because I only caught a glimpse of the target"), rather than the states of the visual system during visual processing ("I lack confidence because I was not focusing on the task").
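As a toy illustration of the logic (not the study's analysis), the simulation below derives confidence from the quality of target perception, the "product"; monitoring then surfaces as a positive trial-wise confidence-accuracy correlation. All distributions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Latent trial-to-trial fluctuations in the quality of target perception.
quality = rng.normal(0.0, 1.0, n)
# Accuracy and confidence both derive from perceptual quality (the
# "product"), each with independent noise.
accuracy = (quality + rng.normal(0.0, 0.5, n) > 0).astype(float)
confidence = quality + rng.normal(0.0, 0.5, n)

# If confidence tracks performance fluctuations, the trial-wise
# confidence-accuracy correlation should be reliably positive.
r = np.corrcoef(confidence, accuracy)[0, 1]
print(f"confidence-accuracy correlation: r = {r:.2f}")
```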
{"title":"Product, not process: Metacognitive monitoring of visual performance during sustained attention.","authors":"Cheongil Kim, Sang Chul Chong","doi":"10.3758/s13423-024-02635-9","DOIUrl":"https://doi.org/10.3758/s13423-024-02635-9","url":null,"abstract":"<p><p>The performance of the human visual system exhibits moment-to-moment fluctuations influenced by multiple neurocognitive factors. To deal with this instability of the visual system, introspective awareness of current visual performance (metacognitive monitoring) may be crucial. In this study, we investigate whether and how people can monitor their own visual performance during sustained attention by adopting confidence judgments as indicators of metacognitive monitoring - assuming that if participants can monitor visual performance, confidence judgments will accurately track performance fluctuations. In two experiments (N <math><mo>=</mo></math> 40), we found that participants were able to monitor fluctuations in visual performance during sustained attention. Importantly, metacognitive monitoring largely relied on the quality of target perception, a product of visual processing (\"I lack confidence in my performance because I only caught a glimpse of the target\"), rather than the states of the visual system during visual processing (\"I lack confidence because I was not focusing on the task\").</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142953964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cracking arbitrariness: A data-driven study of auditory iconicity in spoken English.
Pub Date: 2025-01-08 | DOI: 10.3758/s13423-024-02630-0
Andrea Gregor de Varda, Marco Marelli
Auditory iconic words display a phonological profile that imitates their referents' sounds. Traditionally, those words are thought to constitute a minor portion of the auditory lexicon. In this article, we challenge this assumption by assessing the pervasiveness of onomatopoeia in the English auditory vocabulary through a novel data-driven procedure. We embed spoken words and natural sounds into a shared auditory space through (a) a short-time Fourier transform, (b) a convolutional neural network trained to classify sounds, and (c) a network trained on speech recognition. Then, we employ the obtained vector representations to measure their objective auditory resemblance. These similarity indexes show that imitation is not limited to some circumscribed semantic categories, but instead can be considered as a widespread mechanism underlying the structure of the English auditory vocabulary. We finally empirically validate our similarity indexes as measures of iconicity against human judgments.
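A minimal sketch of the shared-auditory-space idea: time-averaged log-magnitude STFT features stand in here for the paper's learned network embeddings, and cosine similarity serves as the resemblance index. The 16-kHz sampling rate and window length are assumptions.

```python
import numpy as np
from scipy.signal import stft

def embed(waveform, sr=16000):
    """Map a mono waveform to a fixed-length vector via a log-magnitude
    short-time Fourier transform, averaged over time frames."""
    _, _, Z = stft(waveform, fs=sr, nperseg=512)
    log_mag = np.log1p(np.abs(Z))   # (freq_bins, time_frames)
    return log_mag.mean(axis=1)     # time average -> fixed length

def iconicity_index(spoken_word, referent_sound):
    """Cosine similarity between a spoken word and its referent's sound:
    higher values indicate greater objective auditory resemblance."""
    a, b = embed(spoken_word), embed(referent_sound)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```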
{"title":"Cracking arbitrariness: A data-driven study of auditory iconicity in spoken English.","authors":"Andrea Gregor de Varda, Marco Marelli","doi":"10.3758/s13423-024-02630-0","DOIUrl":"https://doi.org/10.3758/s13423-024-02630-0","url":null,"abstract":"<p><p>Auditory iconic words display a phonological profile that imitates their referents' sounds. Traditionally, those words are thought to constitute a minor portion of the auditory lexicon. In this article, we challenge this assumption by assessing the pervasiveness of onomatopoeia in the English auditory vocabulary through a novel data-driven procedure. We embed spoken words and natural sounds into a shared auditory space through (a) a short-time Fourier transform, (b) a convolutional neural network trained to classify sounds, and (c) a network trained on speech recognition. Then, we employ the obtained vector representations to measure their objective auditory resemblance. These similarity indexes show that imitation is not limited to some circumscribed semantic categories, but instead can be considered as a widespread mechanism underlying the structure of the English auditory vocabulary. We finally empirically validate our similarity indexes as measures of iconicity against human judgments.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142953962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual attention matters during word recognition: A Bayesian modeling approach.
Pub Date: 2025-01-07 | DOI: 10.3758/s13423-024-02591-4
Thierry Phénix, Émilie Ginestet, Sylviane Valdois, Julien Diard
It is striking that visual attention, the process by which attentional resources are allocated in the visual field so as to locally enhance visual perception, is a pervasive component of models of eye movements in reading, but is seldom considered in models of isolated word recognition. We describe BRAID, a new Bayesian word-Recognition model with Attention, Interference and Dynamics. Like most of its predecessors, BRAID incorporates three sensory, perceptual, and orthographic knowledge layers together with a lexical membership submodel. Its originality resides in also including three mechanisms that modulate letter identification within strings: an acuity gradient, lateral interference, and visual attention. We calibrated the model such that its temporal scale was consistent with behavioral data, and then explored the model's capacity to generalize to other, independent effects. We evaluated the model's capacity to account for the word-length effect in lexical decision, for the optimal viewing position effect, and for the interaction of crowding and frequency effects in word recognition. We further examined how these effects were modulated by variations in the visual attention distribution. We show that visual attention modulates all three effects and that a narrow distribution of visual attention results in performance patterns that mimic those reported in impaired readers. Overall, the BRAID model can be conceived as a core building block towards integrated models of reading aloud and eye-movement control, of visual word recognition in impaired readers, or of any context in which visual attention matters.
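As a toy illustration (not BRAID itself), the sketch below shows how a Gaussian attention profile combined with an acuity gradient can modulate letter-evidence rates across string positions; all parameter values are invented.

```python
import numpy as np

def attention_weights(n_letters, mu, sigma):
    """Gaussian attention profile over letter positions, normalized to sum
    to 1. A small sigma means narrowly distributed attention, as reported
    for some impaired readers."""
    pos = np.arange(n_letters)
    w = np.exp(-0.5 * ((pos - mu) / sigma) ** 2)
    return w / w.sum()

def letter_evidence_rate(n_letters, fixated, sigma):
    """Per-letter evidence-accumulation rate: an acuity gradient that
    decays with eccentricity from fixation, scaled by attention."""
    acuity = 1.0 / (1.0 + 0.4 * np.abs(np.arange(n_letters) - fixated))
    return acuity * attention_weights(n_letters, mu=fixated, sigma=sigma)

# Fixating letter 3 of a 7-letter word: broad vs. narrow attention.
print(letter_evidence_rate(7, fixated=3, sigma=3.0).round(3))
print(letter_evidence_rate(7, fixated=3, sigma=0.8).round(3))
```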
{"title":"Visual attention matters during word recognition: A Bayesian modeling approach.","authors":"Thierry Phénix, Émilie Ginestet, Sylviane Valdois, Julien Diard","doi":"10.3758/s13423-024-02591-4","DOIUrl":"https://doi.org/10.3758/s13423-024-02591-4","url":null,"abstract":"<p><p>It is striking that visual attention, the process by which attentional resources are allocated in the visual field so as to locally enhance visual perception, is a pervasive component of models of eye movements in reading, but is seldom considered in models of isolated word recognition. We describe BRAID, a new Bayesian word-Recognition model with Attention, Interference and Dynamics. As most of its predecessors, BRAID incorporates three sensory, perceptual, and orthographic knowledge layers together with a lexical membership submodel. Its originality resides in also including three mechanisms that modulate letter identification within strings: an acuity gradient, lateral interference, and visual attention. We calibrated the model such that its temporal scale was consistent with behavioral data, and then explored the model's capacity to generalize to other, independent effects. We evaluated the model's capacity to account for the word length effect in lexical decision, for the optimal viewing position effect, and for the interaction of crowding and frequency effects in word recognition. We further examined how these effects were modulated by variations in the visual attention distribution. We show that visual attention modulates all three effects and that a narrow distribution of visual attention results in performance patterns that mimic those reported in impaired readers. Overall, the BRAID model could be conceived as a core building block, towards the development of integrated models of reading aloud and eye movement control, or of visual recognition of impaired readers, or any context in which visual attention does matter.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142953965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}