Pub Date: 2025-02-21 | DOI: 10.1177/17470218251317372
Victor Kuperman, Dalmo Buzato, Rui Rothe-Neves
The link between the cognitive effort of word processing and the eye-movement patterns elicited by that word is well established in psycholinguistic research using eye-tracking. Yet less evidence or consensus exists regarding whether the same link exists between linguistic complexity measures of a sentence or passage and eye movements registered at the sentence or passage level. This article focuses on "global" measures of syntactic and lexical complexity, i.e., the measures that characterise the structure of the sentence or passage rather than aggregate lexical properties of individual words. We selected several commonly used global complexity measures and tested their predictive power against sentence- and passage-level eye movements in samples of text reading from 13 languages represented in the Multilingual Eye Movement Corpus (MECO). While some syntactic or lexical complexity measures elicited statistically significant effects, they were negligibly small and not of practical relevance for predicting the processing effort either in individual languages or across languages. These findings suggest that the "eye-mind" link known to be valid at the word level may not scale up to larger linguistic units.
Title: Global measures of syntactic and lexical complexity are not strong predictors of eye-movement patterns in sentence and passage reading.
Journal: Quarterly Journal of Experimental Psychology
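The abstract reports complexity effects that were statistically significant yet negligibly small. As a minimal sketch of what such predictive power looks like in practice (not the authors' MECO pipeline), the pure-Python snippet below computes the variance explained (R-squared) by one hypothetical global complexity measure for passage-level reading times; the function name and all numbers are invented for illustration.

```python
def r_squared(x, y):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return (cov * cov) / (vx * vy)

# Hypothetical passages: mean dependency length (a global syntactic
# complexity measure) vs. total passage reading time in seconds.
complexity = [2.1, 2.4, 2.2, 3.0, 2.8, 2.5]
reading_time = [41.0, 39.5, 42.3, 40.8, 41.9, 40.1]

# A value near zero means the measure explains almost none of the
# variance in reading times, i.e., no practical predictive relevance.
print(round(r_squared(complexity, reading_time), 3))
```

With these invented data the variance explained is well under 1%, the kind of "statistically detectable but practically negligible" effect the abstract describes.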
Pub Date: 2025-02-21 | DOI: 10.1177/17470218251325417
Sarah Koch, Torsten Schubert, Sven Blankenberger
Magnitude dimensions influence one another's processing, resulting in shorter reaction times in classification tasks when the magnitude information in the two dimensions matches. These effects are often explained by a shared magnitude representation, as proposed by A Theory of Magnitude (ATOM). Interactions between numbers and loudness indicate that loudness may also be represented as a magnitude. Three experiments were conducted to investigate loudness-number interactions with regard to cross-modality, automaticity, bidirectionality, and the influence of processing speed. In Experiment 1, participants classified the numerical value of visually presented numbers relative to a preceding standard number; tones at different loudness levels were presented simultaneously with the target number. In Experiment 2, participants switched randomly between trials of a numerical classification task and a loudness classification task. Experiment 3 was similar to Experiment 1 but with reduced saliency of the auditory dimension. Across all experiments, there was an interaction between loudness and number magnitude, with shorter reaction times for large (small) numbers when they were accompanied by loud (soft) tones rather than soft (loud) tones. In addition, Experiment 2 showed a bidirectional influence, as the interaction also occurred in the loudness classification task. The effect of distance on the cross-modal loudness-number interaction emerged only partially: only the loudness distance affected the interaction, and this effect was mediated by task-relevance. This may reflect an asymmetry in the influence between numbers and loudness. Overall, the findings support the hypothesis that loudness is represented as a magnitude, in line with ATOM.
Title: EXPRESS: Large sounds and loud numbers? Investigating the bidirectionality and automaticity of cross-modal loudness-number interactions.
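The congruency effect described above can be summarised as the difference between mean reaction times on mismatching pairings (e.g., large number with soft tone) and matching pairings (large number with loud tone). The sketch below is not the authors' analysis; all trial data and labels are invented.

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical reaction times in ms, keyed by (number size, tone loudness).
trials = [
    ("large", "loud", 512), ("large", "loud", 498), ("large", "soft", 547),
    ("small", "soft", 505), ("small", "soft", 519), ("small", "loud", 558),
]

congruent = [rt for size, loud, rt in trials
             if (size, loud) in {("large", "loud"), ("small", "soft")}]
incongruent = [rt for size, loud, rt in trials
               if (size, loud) in {("large", "soft"), ("small", "loud")}]

# Positive values indicate a congruency benefit: faster responses when
# number magnitude and loudness match.
effect = mean(incongruent) - mean(congruent)
print(f"congruency effect: {effect:.1f} ms")
```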
Pub Date: 2025-02-20 | DOI: 10.1177/17470218251325145
Sarah E Colby, Bob McMurray
Word recognition is generally thought to be supported by an automatic process of lexical competition, at least in normal-hearing young adults. When listening becomes challenging, whether due to properties of the environment (noise) or of the individual (hearing loss), the dynamics of lexical competition change and word recognition can feel effortful and fatiguing. In cochlear implant users, several dimensions of lexical competition have been identified that capture the timing of the onset of lexical competition (Wait-and-See), the degree to which competition is fully resolved (Sustained Activation), and how quickly lexical candidates are activated (Activation Rate). It is unclear, however, how these dimensions relate to listening effort. To address this question, a group of cochlear implant users (N = 79) completed a pupillometry task to index effort and a visual world paradigm task to index the dynamics of lexical competition, as part of a larger battery of clinical and experimental tasks. Listeners who engaged more effort, as indexed by the peak pupil size difference score, fell lower along the Wait-and-See dimension, suggesting that these listeners engage effort to be less Wait-and-See (that is, to begin the process of lexical competition earlier). Listeners who engaged effort earlier had better word and sentence recognition outcomes. The timing of effort was predicted by age and spectral fidelity, but no audiological or demographic factor predicted the peak pupil size difference. The dissociation between the magnitude and the timing of engaged effort suggests that they serve different functions in spoken word recognition.
Title: EXPRESS: Engaging effort improves efficiency for spoken word recognition in cochlear implant users.
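The abstract does not define its "peak pupil size difference score". A common pupillometry convention, sketched here under that assumption, is to subtract a pre-stimulus baseline from each sample and take the maximum of the corrected trace; the difference of peaks across conditions then indexes extra engaged effort. All traces, names, and numbers below are illustrative, not the study's data.

```python
def peak_dilation(trace, baseline_samples):
    """Peak of a pupil trace after subtracting the mean pre-stimulus baseline."""
    baseline = sum(trace[:baseline_samples]) / baseline_samples
    return max(v - baseline for v in trace[baseline_samples:])

# Hypothetical pupil traces (arbitrary units) for a harder vs. easier
# listening condition; the first three samples are the baseline window.
hard = [3.00, 3.02, 2.98, 3.10, 3.35, 3.52, 3.40, 3.20]
easy = [3.00, 3.01, 2.99, 3.05, 3.15, 3.22, 3.18, 3.05]

# Difference score: extra peak dilation engaged in the harder condition.
diff_score = peak_dilation(hard, 3) - peak_dilation(easy, 3)
print(round(diff_score, 3))
```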
Pub Date: 2025-02-19 | DOI: 10.1177/17470218251324168
Emily Frost, Paresh Malhotra, Talya Porat, Katarina Poole, Aarya Menon, Lorenzo Picinali
Introduction: In the dementia field, a number of applications aimed at boosting functional abilities are being developed. A notable gap remains in how serious games can advance knowledge of the potential relationship between hearing and cognitive health in mid-life. The aim of this study was to evaluate the auditory-cognitive training application HELIX against outcome measures for speech-in-noise perception, cognitive tasks, communication confidence, quality of life, and usability.
Methods: A randomised controlled trial was completed with 43 participants with subjective hearing loss and/or cognitive impairment, over a 4-week play period and a further 4-week follow-up period. Outcome measures included a new online implementation of the Digit-Triplet-Test, a battery of online cognitive tests, and quality-of-life questionnaires. Paired semi-structured interviews and usability measures were completed to assess HELIX's impact on quality of life and its usability.
Results: An improvement in performance on the Digit-Triplet-Test, measured four and eight weeks after baseline, was found within the training group; however, this improvement was not significant between the training and control groups. No significant improvements were found in any other outcome measure. Thematic analysis suggested that HELIX prompted realisation of difficulties and of the actions required, improved listening, and positive behaviour change.
Discussion: Employing a participatory design approach ensured that HELIX is relevant and useful for participants who may be at risk of developing age-related hearing loss and cognitive decline. Whilst an improvement in the Digit-Triplet-Test was seen, it is not possible to conclude whether this resulted from playing HELIX.
Title: EXPRESS: HEaring and LIstening eXperience (HELIX): Evaluation of a co-designed serious game for auditory-cognitive training.
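The abstract does not describe how the online Digit-Triplet-Test was implemented. Digit-triplet tests conventionally estimate a speech reception threshold with an adaptive one-up/one-down rule on the signal-to-noise ratio (SNR): the SNR is made harder after a correct triplet and easier after an error. The sketch below illustrates that general idea only, with invented parameters, and is not the study's actual procedure.

```python
def run_staircase(responses, start_snr=0.0, step=2.0):
    """Track the SNR (dB) across trials; True = triplet repeated correctly."""
    snr, track = start_snr, []
    for correct in responses:
        track.append(snr)
        snr += -step if correct else step  # one-down / one-up rule
    return track

# Hypothetical run of eight trials; a threshold estimate is often taken as
# the mean SNR over the later (converged) trials.
track = run_staircase([True, True, False, True, False, True, True, False])
threshold = sum(track[2:]) / len(track[2:])
print(track, round(threshold, 2))
```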
Pub Date: 2025-02-19 | DOI: 10.1177/17470218251325245
Joris Perra, Benedicte Poulin-Charronnat, Thierry Baccino, Patrick Bard, Philippe Pfister, Philippe Lalitte, Mélissa Zerbib, Véronique Drai-Zerbib
Expertise is associated with a knowledge-driven approach to information processing. Experts benefit from long-term knowledge structures (chunks and retrieval structures/templates), leading them to formulate expectations about local stimulus characteristics and to extract information from areas distant from the fixation location. To shed light on how knowledge-driven processing affects eye movements during music reading, this study aimed to determine how expert musicians deal with local complexity in a sight-reading task. Thirty musicians at two expertise levels sight-read four-bar score excerpts. Local analyses investigated how gaze behaves before and during the sight-reading of different score characteristics, such as accidentals, the location of notes on the staff, note count, and heterogeneity of notes. The more expert musicians (i) were less affected by the foveal load induced by local complexity, showing a smaller increase in fixation durations from noncomplex features to locally complex ones than the less expert musicians; (ii) showed saccadic flexibility towards local complexity projected onto the parafoveal area, being the only group to exhibit shorter incoming progressive saccades on accidentals and larger incoming progressive saccades on new notes compared with noncomplex features; and (iii) showed visuomotor flexibility depending on the complexity being played, being the only group to exhibit a shorter eye-hand span when playing accidentals or distant notes compared with noncomplex features. Overall, this study highlights local analyses as a relevant tool for investigating foveal and parafoveal processing skills during music reading.
Title: EXPRESS: How do expert musicians deal with local complexity in a sight-reading task?
Pub Date: 2025-02-18 | DOI: 10.1177/17470218251316797
Andrew M Burleson, Pamela E Souza
Listeners often find themselves in scenarios where speech is disrupted, misperceived, or otherwise difficult to recognise. In these situations, many individuals report exerting additional effort to understand speech, even when repairing the speech signal may be difficult or impossible. This investigation aimed to characterise cognitive effort over time, during both sentence listening and a post-sentence retention interval, by observing the pupillary response of participants with normal to borderline-normal hearing in two interrupted speech conditions: sentences interrupted by gaps of silence or by bursts of noise. The pupillary response serves as a measure of the cumulative resources devoted to task completion. Both interruption conditions resulted in significantly greater pupil dilation than the uninterrupted speech condition. Just prior to the end of a sentence, trials periodically interrupted by bursts of noise elicited greater pupil dilation than the silent-interrupted condition. Compared to the uninterrupted condition, both interruption conditions resulted in increased dilation after sentence end but before repetition, possibly reflecting sustained processing demands. Understanding pupil dilation as a marker of cognitive effort is important for clinicians and researchers when assessing the additional effort exerted by listeners with hearing loss who may use cochlear implants or hearing aids. Even when successful perceptual repair is unlikely, listeners may continue to exert increased effort when processing misperceived speech, which could cause them to miss upcoming speech or contribute to heightened listening fatigue.
Title: The time course of cognitive effort during disrupted speech.
Pub Date: 2025-02-18 | DOI: 10.1177/17470218251318311
Xiaoye Michael Wang, Cassie Hy Chan, Yiru Wang, April Karlinsky, Merryn D Constable, Timothy N Welsh
The influence of gaze cues on target prioritisation (reaction times [RTs]) and movement execution (movement trajectories) differs based on the ability of the human model providing the gaze cue to manually interact with the targets. Whereas gaze cues consistently affected RTs, movement trajectories were affected only when the hands of the human model had the potential to interact with the target. However, the perceived ability to interact with the targets was confounded with the proximity between the model's hands and the targets. The current study explored whether the influence of gaze cues on movement trajectories is shaped by the model's potential to access and interact with the targets using their hands, or simply by the proximity of the hands. A centrally presented human model randomly gazed towards one of two peripheral target locations. Participants executed aiming movements to targets that non-predictively appeared at one location at a stimulus onset asynchrony of 100, 350, or 850 ms. In Experiment 1, the model's hands could not directly access the targets, as each was holding a tray. In Experiment 2, the hands had direct access to the targets, but their palms-downwards orientation and wrist-flexed posture made efficient interaction with the targets unlikely. Although RTs showed a facilitation effect of the gaze cue in both experiments, changes in movement trajectories were observed only when the model had direct access to the target (Experiment 2). The results suggest that the gaze model's direct hand access is necessary for social gaze cues to influence movement execution.
Title: Activation of the motor system following gaze cues is determined by hand access, not hand proximity.
Pub Date: 2025-02-18 | DOI: 10.1177/17470218251324932
Myrto Efstathiou, Louise S Delicato, Anna Sedda
Mental representations guide action planning and execution. While hand representations have been extensively studied, little is known about differences between representations of the hands, the feet, and the whole body. Previous studies point to functional and sensory differences between body parts, and between parts and the whole body; studies of the hands and feet also show that it matters whether we are aware of using motor strategies when we activate body representations, something that has not yet been compared across body parts or between parts and the whole body. Sixty participants (M = 26.68 years, SD = 8.22) took part in an online experiment that included Implicit Association Tests (IATs), in which participants are not fully aware of using a motor strategy, and a Mental Motor Chronometry (MMC) task, a more explicit task requiring awareness of imagining actions. The influence of visual imagery was controlled by administering the Vividness of Visual Imagery (VVI) questionnaire to exclude non-motor-related effects. Results show that when the task requires less awareness, there are no differences between hands, feet, and whole body. Differences emerge when more awareness of body representation and related processes is required, with a more pronounced and finer-grained representation of the hands than of the whole body. No differences were found between hands and feet or between whole body and feet. These results highlight the importance of awareness in the representation of body parts and suggest that motor strategies contribute to the differentiation between hand and whole-body representations, a distinction not accounted for by visual imagery differences.
Title: "EXPRESS: Hands representation is more fine graded and more pronounced than whole-body only if we are aware of using a motor strategy." Quarterly Journal of Experimental Psychology.
Pub Date : 2025-02-17 DOI: 10.1177/17470218251324437
Lilly Roth, Julia F Huber, Sophia Kronenthaler, Jean-Philippe van Dijck, Krzysztof Cipora, Martin V Butz, Hans-Christoph Nuerk
Many studies have demonstrated spatial-numerical associations, but the debate about their origin is still ongoing. Some approaches consider cardinality representations in long-term memory, such as a Mental Number Line, while others suggest ordinality representations, for both numerical and non-numerical stimuli, originating in working or long-term memory. To investigate how long-term memory and working memory influence spatial associations, and to disentangle the roles of cardinality and ordinality, we ran three preregistered online experiments (N = 515). We assessed spatial response preferences for letters (which, in contrast to numbers, convey only ordinal but no cardinal information) in a bimanual go/no-go consonant-vowel classification task. Experiment 1 ('no-go' trials: non-letter symbols) validated our setup. In Experiments 2 and 3, participants learned an ordinal letter sequence prior to the task, which they recalled afterwards. In Experiment 2, this sequence was merely to be maintained ('no-go' trials: non-letter symbols), whereas in Experiment 3, it needed to be retrieved during the task ('no-go' trials: letters outside the sequence). We replicated letter-space associations based on the alphabet stored in long-term memory (i.e., letters earlier/later in the alphabet associated with left/right, respectively) in all experiments. However, letter-space associations based on the working memory sequence (i.e., letters earlier/later in the sequence associated with left/right, respectively) were only detected in Experiment 3, where retrieval occurred during the task. Spatial short- and long-term associations of letters therefore seem to coexist. These findings support a hybrid model that incorporates both short- and long-term representations, and that applies to letters as it does to numbers.
Title: "EXPRESS: Looks like SNARC spirit: Coexistence of short- and long-term associations between letters and space." Quarterly Journal of Experimental Psychology.
Pub Date : 2025-02-14 DOI: 10.1177/17470218251317191
Tom Mercer
Proactive interference occurs when older memories interfere with current information processing and retrieval. It is often explained with reference to familiarity, where the reappearance of highly familiar items from the recent past produces more disruption than older, less familiar items. However, there are other forms of familiarity beyond recency that may be important, and these were explored in a verbal recent-probes task. Participants viewed eight targets per trial and then determined whether a probe matched any of those targets. Probes matching a target from the previous trial, rather than an earlier trial, led to more errors, revealing proactive interference. However, this effect was influenced by experimental familiarity (whether stimuli were repeated or unique) and pre-experimental familiarity (whether stimuli were meaningful words or meaningless non-words). Specifically, proactive interference was strongest for repeated non-words and weakest for unique non-words, whereas stimulus repetition had little impact for words. In addition, the time separating trials (temporal familiarity) was unrelated to proactive interference. The present findings reveal more complex effects of familiarity than have previously been assumed. To understand proactive interference in a working memory task, it is necessary to consider the role of long-term memory via experimental and pre-experimental stimulus familiarity.
Title: "Familiarity influences on proactive interference in verbal memory." Quarterly Journal of Experimental Psychology.