The psychometrics of rating facial attractiveness using different response scales.
Robin S S Kramer, Kay L Ritchie, Tessa R Flack, Michael O Mireku, Alex L Jones
Pub Date: 2024-09-01 | Epub Date: 2024-05-23 | DOI: 10.1177/03010066241256221 | Perception, pp. 645-660 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11348630/pdf/
Perceiving facial attractiveness is an important behaviour across psychological science because these judgments have real-world consequences. However, there is little consensus on how this behaviour should be measured, and practices differ widely. Research typically asks participants to provide ratings of attractiveness on a multitude of different response scales, with little consideration of the psychometric properties of these scales. Here, we make psychometric comparisons across nine different response scales. Specifically, we analysed the psychometric properties of a binary response, a 0-100 scale, a visual analogue scale, and a set of Likert scales (1-3, 1-5, 1-7, 1-8, 1-9, 1-10) as tools to measure attractiveness, calculating a range of commonly used statistics for each. While certain properties suggested researchers might choose to favour the 1-5, 1-7, and 1-8 scales, we generally found little evidence of an advantage for one scale over any other. Taken together, our investigation takes stock of currently used techniques for measuring facial attractiveness and makes recommendations for researchers in this field.
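Comparisons of this kind rest on standard reliability statistics computed over raters and faces. As a minimal sketch of one such statistic, the snippet below simulates 30 hypothetical raters judging 50 faces on a 1-7 Likert scale and computes Cronbach's alpha with raters treated as items; the rater count, noise model, and scale mapping are illustrative assumptions, not the data or the full set of statistics reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cronbach_alpha(ratings):
    """Cronbach's alpha over a raters x faces matrix, treating raters as items."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[0]                         # number of raters
    rater_vars = ratings.var(axis=1, ddof=1)     # variance of each rater's scores
    total_var = ratings.sum(axis=0).var(ddof=1)  # variance of summed scores per face
    return k / (k - 1) * (1 - rater_vars.sum() / total_var)

# Hypothetical data: each face has a latent attractiveness value that is mapped,
# with independent rater noise, onto a 1-7 Likert response.
latent = rng.normal(0, 1, 50)
ratings_1to7 = np.clip(np.round(latent + rng.normal(0, 0.8, (30, 50)) + 4), 1, 7)

print(f"Cronbach's alpha, simulated 1-7 scale: {cronbach_alpha(ratings_1to7):.2f}")
```

The same simulated latent values could be re-mapped onto a binary, 0-100, or visual analogue response to compare how reliability behaves across scale formats.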
{"title":"The psychometrics of rating facial attractiveness using different response scales.","authors":"Robin S S Kramer, Kay L Ritchie, Tessa R Flack, Michael O Mireku, Alex L Jones","doi":"10.1177/03010066241256221","DOIUrl":"10.1177/03010066241256221","url":null,"abstract":"<p><p>Perceiving facial attractiveness is an important behaviour across psychological science due to these judgments having real-world consequences. However, there is little consensus on the measurement of this behaviour, and practices differ widely. Research typically asks participants to provide ratings of attractiveness across a multitude of different response scales, with little consideration of the psychometric properties of these scales. Here, we make psychometric comparisons across nine different response scales. Specifically, we analysed the psychometric properties of a binary response, a 0-100 scale, a visual analogue scale, and a set of Likert scales (1-3, 1-5, 1-7, 1-8, 1-9, 1-10) as tools to measure attractiveness, calculating a range of commonly used statistics for each. While certain properties suggested researchers might choose to favour the 1-5, 1-7 and 1-8 scales, we generally found little evidence of an advantage for one scale over any other. Taken together, our investigation provides consideration of currently used techniques for measuring facial attractiveness and makes recommendations for researchers in this field.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"645-660"},"PeriodicalIF":1.6,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11348630/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141080830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vibrotactile spatial acuity on the back.
Myrthe A Plaisier, Cahelle S J M Vleeshouwers, Nynke Boonstra, Yueying Shi, Sam J I van der Velden, Wouter K Vos, Astrid M L Kappers
Pub Date: 2024-09-01 | Epub Date: 2024-06-11 | DOI: 10.1177/03010066241258969 | Perception, pp. 619-631 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11348621/pdf/
Vibrotactile feedback can be built into clothing such as vests, which means that vibrotactile information is often presented to the back. It is known that the back has relatively low spatial acuity. Spatial acuity varies across limbs and sometimes across different locations on a limb. These known anisotropies suggest that there might be systematic variations in vibrotactile spatial acuity across different areas of the back and also across orientations (i.e., horizontal vs. vertical). Here we systematically measured spatial acuity in four areas of the back for both horizontal and vertical orientations. The results show no significant differences in spatial acuity between the back areas that were tested. Spatial acuity was, however, higher in the horizontal direction than in the vertical direction by roughly a factor of two. This means that when designing vibrotactile displays for the back, tactor density can be lower in the vertical direction than in the horizontal direction, and density should be constant across different areas of the back.
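Spatial acuity in tasks like this is typically summarised as a discrimination threshold read off a fitted psychometric function. The sketch below fits a cumulative Gaussian to hypothetical proportion-correct data as a function of tactor separation and reports the 75%-correct threshold; the separations, response rates, and two-alternative chance level are assumptions for illustration, not the paper's stimuli or analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical two-alternative discrimination data: proportion of correct
# responses as a function of the separation between two tactors (cm).
separation_cm = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
p_correct = np.array([0.55, 0.62, 0.74, 0.86, 0.95, 0.99])

def psychometric(x, threshold, slope):
    # Cumulative Gaussian scaled from chance (0.5) to perfect performance (1.0);
    # the threshold parameter corresponds to the 75%-correct separation.
    return 0.5 + 0.5 * norm.cdf(x, loc=threshold, scale=slope)

params, _ = curve_fit(psychometric, separation_cm, p_correct, p0=[3.0, 1.0])
print(f"Estimated acuity threshold: {params[0]:.2f} cm (slope {params[1]:.2f} cm)")
```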
{"title":"Vibrotactile spatial acuity on the back.","authors":"Myrthe A Plaisier, Cahelle S J M Vleeshouwers, Nynke Boonstra, Yueying Shi, Sam J I van der Velden, Wouter K Vos, Astrid M L Kappers","doi":"10.1177/03010066241258969","DOIUrl":"10.1177/03010066241258969","url":null,"abstract":"<p><p>Vibrotactile feedback can be built into clothing such as vests. This means that often vibrotactile information is presented to the back. It is known that the back has a relatively low spatial acuity. Spatial acuity varies across different limbs and sometimes with different locations on a limb. These known anisotropies suggest that there might be systematic variations in vibrotactile spatial acuity for different areas of the back and also for different orientations (i.e. horizontal vs. vertical). Here we systematically measured spatial acuity in four areas of the back for both horizontal and vertical orientations. The results show no significant differences in spatial acuity for the back areas that were tested. Spatial acuity was, however, higher in the horizontal direction than in the vertical direction by roughly a factor of two. This means that when designing vibrotactile displays for the back the tactor density can be lower in the vertical direction than in the horizontal direction and density should be constant for different areas of the back.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"619-631"},"PeriodicalIF":1.6,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11348621/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141307203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal resolution relates to sensory hyperreactivity independently of stimulus detection sensitivity in individuals with autism spectrum disorder.
Ayako Kaneko, Takeshi Atsumi, Masakazu Ide
Pub Date: 2024-09-01 | Epub Date: 2024-06-12 | DOI: 10.1177/03010066241259729 | Perception, pp. 585-596
Researchers have been focusing on the perceptual characteristics of autism spectrum disorder (ASD) in terms of sensory hyperreactivity. Previously, we demonstrated that temporal resolution, that is, the accuracy with which the order of two successive vibrotactile stimuli can be differentiated, is associated with the severity of sensory hyperreactivity. Here, we examined whether an increase in the perceptual intensity of a tactile stimulus, despite its short duration, is derived from high temporal resolution and a high frequency of sensory temporal summation. Twenty ASD and 22 typically developing (TD) participants completed two psychophysical tasks: one evaluating the detectable duration of a vibrotactile stimulus of fixed amplitude, and one evaluating temporal resolution. Sensory hyperreactivity was estimated using a self-report questionnaire. There was no relationship between temporal resolution and the duration of detectable stimuli in either group. However, the ASD group showed more severe sensory hyperreactivity in daily life than the TD group, and ASD participants with severe sensory hyperreactivity tended to have high temporal resolution, but not high sensitivity to stimulus duration. Contrary to our hypothesis, temporal resolution and stimulus detection sensitivity may rely on different processing. We suggest that atypical temporal processing affects sensory reactivity in ASD.
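The key result is a relationship between temporal resolution and questionnaire-based hyperreactivity rather than between detection sensitivity and hyperreactivity. A minimal sketch of how such a relationship can be quantified is given below, using a rank correlation between hypothetical temporal order judgment (TOJ) thresholds and hyperreactivity scores; the values and the choice of Spearman's rho are illustrative assumptions, not the paper's data or its exact statistical model.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-participant values: TOJ thresholds in ms (lower = finer
# temporal resolution) and self-reported hyperreactivity scores.
toj_threshold_ms = np.array([45, 60, 38, 72, 55, 41, 66, 50, 35, 80])
hyperreactivity = np.array([32, 25, 36, 20, 27, 34, 22, 28, 38, 18])

rho, p = spearmanr(toj_threshold_ms, hyperreactivity)
# A negative rho here would mean: finer temporal resolution, more hyperreactivity.
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```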
{"title":"Temporal resolution relates to sensory hyperreactivity independently of stimulus detection sensitivity in individuals with autism spectrum disorder.","authors":"Ayako Kaneko, Takeshi Atsumi, Masakazu Ide","doi":"10.1177/03010066241259729","DOIUrl":"10.1177/03010066241259729","url":null,"abstract":"<p><p>Researchers have been focusing on perceptual characteristics of autism spectrum disorder (ASD) in terms of sensory hyperreactivity. Previously, we demonstrated that temporal resolution, which is the accuracy to differentiate the order of two successive vibrotactile stimuli, is associated with the severity of sensory hyperreactivity. We currently examined whether an increase in the perceptual intensity of a tactile stimulus, despite its short duration, is derived from high temporal resolution and high frequency of sensory temporal summation. Twenty ASD and 22 typically developing (TD) participants conducted two psychophysical experimental tasks to evaluate <i>detectable duration</i> of vibrotactile stimulus with same amplitude and to evaluate temporal resolution. The sensory hyperreactivity was estimated using self-reported questionnaire. There was no relationship between the temporal resolution and the duration of detectable stimuli in both groups. However, the ASD group showed severe sensory hyperreactivity in daily life than TD group, and the ASD participants with severe sensory hyperreactivity tended to have high temporal resolution, not high sensitivity of detectable duration. Contrary to the hypothesis, there might be different processing between temporal resolution and sensitivity for stimulus detection. We suggested that the atypical temporal processing would affect to sensory reactivity in ASD.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"585-596"},"PeriodicalIF":1.6,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141307202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The importance of multisensory-motor learning on subsequent visual recognition.
Hellen Kyler, Karin James
Pub Date: 2024-09-01 | Epub Date: 2024-06-20 | DOI: 10.1177/03010066241258967 | Perception, pp. 597-618
Visual object recognition is faster after active manual exploration of objects than after passive visual processing alone. Manual exploration allows viewers to select important information about object structure that may facilitate recognition. Viewpoints where the objects' axis of elongation is perpendicular or parallel to the line of sight are selected more during exploration, recognized faster than other viewpoints, and afford the most information about structure when object movement is controlled by the viewer. Prior work used virtual object exploration in active and passive viewing conditions, limiting multisensory structural object information. Adding multisensory information to encoding may change the accuracy of overall recognition, viewpoint selection, and viewpoint recognition. We tested whether the known active advantage for object recognition would change when real objects were studied, affording visual and haptic information. Participants interacted with 3D novel objects during manual exploration or passive viewing of another's object interactions. Object recognition was tested using several viewpoints of rendered objects. We found that manually explored objects were recognized more accurately than objects studied through passive exploration and that recognition of viewpoints differed from previous work.
{"title":"The importance of multisensory-motor learning on subsequent visual recognition.","authors":"Hellen Kyler, Karin James","doi":"10.1177/03010066241258967","DOIUrl":"10.1177/03010066241258967","url":null,"abstract":"<p><p>Speed of visual object recognition is facilitated after active manual exploration of objects relative to passive visual processing alone. Manual exploration allows viewers to select important information about object structure that may facilitate recognition. Viewpoints where the objects' axis of elongation is perpendicular or parallel to the line of sight are selected more during exploration, recognized faster than other viewpoints, and afford the most information about structure when object movement is controlled by the viewer. Prior work used virtual object exploration in active and passive viewing conditions, limiting multisensory structural object information. Adding multisensory information to encoding may change accuracy of overall recognition, viewpoint selection, and viewpoint recognition. We tested whether the known active advantage for object recognition would change when real objects were studied, affording visual and haptic information. Participants interacted with 3D novel objects during manual exploration or passive viewing of another's object interactions. Object recognition was tested using several viewpoints of rendered objects. We found that manually explored objects were recognized more accurately than objects studied through passive exploration and that recognition of viewpoints differed from previous work.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"597-618"},"PeriodicalIF":1.6,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141428063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Obituary: Wim van de Grind (23 April 1936-14 July 2024).
Frans Verstraten
Pub Date: 2024-08-14 | DOI: 10.1177/03010066241274439
{"title":"Obituary: Wim van de Grind (23 April 1936-14 July 2024).","authors":"Frans Verstraten","doi":"10.1177/03010066241274439","DOIUrl":"https://doi.org/10.1177/03010066241274439","url":null,"abstract":"","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"3010066241274439"},"PeriodicalIF":1.6,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141977041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Consistent social information perceived in animated backgrounds improves ensemble perception of facial expressions.
Mengfei Zhao, Jun Wang
Pub Date: 2024-08-01 | Epub Date: 2024-05-09 | DOI: 10.1177/03010066241253073 | Perception, pp. 563-578
Observers can rapidly extract the mean emotion from a set of faces with remarkable precision, an ability known as ensemble coding. Previous studies have demonstrated that matched physical backgrounds improve the precision of ongoing ensemble tasks. However, it remains unknown whether this facilitation effect still occurs when matched social information is perceived from the backgrounds. In two experiments, participants decided whether the test face in the retrieving phase appeared more disgusted or more neutral than the mean emotion of the face set in the encoding phase. Both phases were paired with task-irrelevant animated backgrounds, which included either a forward movement trajectory carrying "cooperatively chasing" information, or a backward movement trajectory conveying no such chasing information. The backgrounds in the encoding and retrieving phases were either mismatched (i.e., forward and backward replays of the same trajectory) or matched (i.e., two identical forward movement trajectories in Experiment 1, or two different forward movement trajectories in Experiment 2). Participants in both experiments showed higher ensemble precision and better discrimination sensitivity when the backgrounds matched. The findings suggest that consistent social information perceived from memory-related context exerts a context-matching facilitation effect on ensemble coding and, more importantly, that this effect is independent of consistent physical information.
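Discrimination sensitivity in this kind of two-choice task is commonly summarised with a signal-detection measure such as d-prime. The sketch below computes d-prime from hypothetical hit and false-alarm rates in matched versus mismatched background conditions, with a log-linear correction for extreme rates; the rates, trial count, and correction are assumptions for illustration, not the paper's actual analysis pipeline.

```python
import numpy as np
from scipy.stats import norm

def dprime(hit_rate, fa_rate, n_trials=60):
    """d-prime from hit and false-alarm rates, with a log-linear correction
    so that rates of exactly 0 or 1 do not produce infinite z-scores."""
    hr = (hit_rate * n_trials + 0.5) / (n_trials + 1)
    fr = (fa_rate * n_trials + 0.5) / (n_trials + 1)
    return norm.ppf(hr) - norm.ppf(fr)

# Hypothetical summary for one participant: "more disgusted" responses when the
# test face truly exceeded the set mean (hits) vs. when it did not (false alarms).
print(f"matched backgrounds:    d' = {dprime(0.82, 0.25):.2f}")
print(f"mismatched backgrounds: d' = {dprime(0.74, 0.31):.2f}")
```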
{"title":"Consistent social information perceived in animated backgrounds improves ensemble perception of facial expressions.","authors":"Mengfei Zhao, Jun Wang","doi":"10.1177/03010066241253073","DOIUrl":"10.1177/03010066241253073","url":null,"abstract":"<p><p>Observers can rapidly extract the mean emotion from a set of faces with remarkable precision, known as ensemble coding. Previous studies have demonstrated that matched physical backgrounds improve the precision of ongoing ensemble tasks. However, it remains unknown whether this facilitation effect still occurs when matched social information is perceived from the backgrounds. In two experiments, participants decided whether the test face in the retrieving phase appeared more disgusted or neutral than the mean emotion of the face set in the encoding phase. Both phases were paired with task-irrelevant animated backgrounds, which included either the forward movement trajectory carrying the \"cooperatively chasing\" information, or the backward movement trajectory conveying no such chasing information. The backgrounds in the encoding and retrieving phases were either mismatched (i.e., forward and backward replays of the same trajectory), or matched (i.e., two identical forward movement trajectories in Experiment 1, or two different forward movement trajectories in Experiment 2). Participants in both experiments showed higher ensemble precisions and better discrimination sensitivities when backgrounds matched. The findings suggest that consistent social information perceived from memory-related context exerts a context-matching facilitation effect on ensemble coding and, more importantly, this effect is independent of consistent physical information.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"563-578"},"PeriodicalIF":1.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140899070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discrepancies in perceived humanness between spatially filtered and unfiltered faces and their associations with uncanny feelings.
Motohiro Ito, Atsunobu Suzuki
Pub Date: 2024-08-01 | Epub Date: 2024-05-16 | DOI: 10.1177/03010066241252355 | Perception, pp. 529-543
Human and artificial features that coexist in certain types of human-like robots create a discrepancy in perceived humanness and evoke uncanny feelings in human observers. However, whether this perceptual mismatch in humanness occurs for all faces, and whether it is related to the uncanny feelings toward them, is unknown. We investigated this by examining perceived humanness for a variety of natural images of robot and human faces with different spatial frequency (SF) information: that is, faces with only low SF (LSF), middle SF (MSF), or high SF (HSF) information, and intact (spatially unfiltered) faces. Uncanny feelings elicited by these faces were also measured. The results showed a perceptual mismatch: LSF, MSF, and HSF faces were perceived as more human than intact faces. This was particularly true for intact robot faces that looked slightly human, which tended to evoke strong uncanny feelings. Importantly, the mismatch in perceived humanness between the intact and spatially filtered faces was positively correlated with uncanny feelings toward intact faces. Given that the human visual system performs SF analysis when processing faces, the perceptual mismatches observed in this study likely occur in real life for all faces, and as such might be a ubiquitous source of uncanny feelings in real-life situations.
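Spatially filtered face images of this kind can be produced with standard low-pass, band-pass, and high-pass filtering. A minimal sketch is given below, using Gaussian blurs of a stand-in image to separate coarse (LSF), intermediate (MSF), and fine (HSF) bands; the filter widths and the random placeholder image are assumptions for illustration, not the cut-off frequencies or stimuli used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
face = rng.random((256, 256))  # placeholder for a grayscale face image

# A wide Gaussian low-pass keeps only coarse structure (LSF); subtracting a
# narrow low-pass isolates fine detail (HSF); the difference of the two
# low-pass images keeps an intermediate band (MSF).
lsf = gaussian_filter(face, sigma=8)
hsf = face - gaussian_filter(face, sigma=2)
msf = gaussian_filter(face, sigma=2) - lsf

print(lsf.shape, msf.shape, hsf.shape)
```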
{"title":"Discrepancies in perceived humanness between spatially filtered and unfiltered faces and their associations with uncanny feelings.","authors":"Motohiro Ito, Atsunobu Suzuki","doi":"10.1177/03010066241252355","DOIUrl":"10.1177/03010066241252355","url":null,"abstract":"<p><p>Human and artificial features that coexist in certain types of human-like robots create a discrepancy in perceived humanness and evoke uncanny feelings in human observers. However, whether this perceptual mismatch in humanness occurs for all faces, and whether it is related to the uncanny feelings toward them, is unknown. We investigated this by examining perceived humanness for a variety of natural images of robot and human faces with different spatial frequency (SF) information: that is, faces with only low SF, middle SF, and high SF information, and intact (spatially unfiltered) faces. Uncanny feelings elicited by these faces were also measured. The results showed perceptual mismatches that LSF, MSF, and HSF faces were perceived as more human than intact faces. This was particularly true for intact robot faces that looked slightly human, which tended to evoke strong uncanny feelings. Importantly, the mismatch in perceived humanness between the intact and spatially filtered faces was positively correlated with uncanny feelings toward intact faces. Given that the human visual system performs SF analysis when processing faces, the perceptual mismatches observed in this study likely occur in real life for all faces, and as such might be a ubiquitous source of uncanny feelings in real-life situations.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"529-543"},"PeriodicalIF":1.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140946061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flicker and reading speed: Effects on individuals with visual sensitivity.
Caitlin A Laycox, Rory Thompson, Jasmine A Haggerty, Arnold J Wilkins, Sarah M Haigh
Pub Date: 2024-08-01 | Epub Date: 2024-05-06 | DOI: 10.1177/03010066241252066 | Perception, pp. 512-528
Flicker and patterns of stripes in the modern environment can evoke visual illusions, discomfort, migraine, and seizures. We measured reading speed while striped and less striped texts were illuminated with LED lights. In Experiment 1, the lights flickered at 60 Hz and 120 Hz, compared to 60 kHz (perceived as steady light). In Experiment 2, the lights flickered at 60 Hz or 600 Hz (the frequency at which the phantom array is most visible), and were compared to continuous light. Two types of text were used: one containing words with high horizontal autocorrelation (striped) and another containing words with low autocorrelation (less striped). We also measured the number of illusions participants saw in the Pattern Glare (PG) Test. Overall, reading speed was slowest during the 60 Hz and 600 Hz flicker and was slower when reading the high autocorrelation text. Interestingly, the low PG group showed greater effects of flicker on reading speed than the high PG group, which tended to be slower overall. In addition, reading speed in the high PG group was reduced when the autocorrelation of the text was high. These findings suggest that uncomfortable visual environments reduce reading efficiency, the more so in individuals who are visually sensitive.
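The "striped" versus "less striped" distinction rests on the horizontal autocorrelation of the rendered text. As a minimal illustration, the sketch below computes a row-wise horizontal autocorrelation at a fixed pixel lag for a regular striped pattern and a random pattern; the lag, image sizes, and binary stand-in images are assumptions, not the texts or the exact autocorrelation measure used in the study.

```python
import numpy as np

def horizontal_autocorrelation(image, lag=1):
    """Mean Pearson correlation between each row and the same row shifted by `lag` pixels."""
    image = np.asarray(image, dtype=float)
    a, b = image[:, :-lag], image[:, lag:]
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    denom = np.sqrt((a**2).sum(axis=1) * (b**2).sum(axis=1))
    return np.mean((a * b).sum(axis=1) / denom)

# Stand-in "text" images: a strongly periodic striped pattern vs. random pixels.
striped = np.tile([0, 0, 1, 1], (64, 32))  # 64 x 128, period of 4 pixels per row
irregular = (np.random.default_rng(2).random((64, 128)) > 0.5).astype(float)

print(f"striped:   {horizontal_autocorrelation(striped, lag=4):.2f}")
print(f"irregular: {horizontal_autocorrelation(irregular, lag=4):.2f}")
```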
{"title":"Flicker and reading speed: Effects on individuals with visual sensitivity.","authors":"Caitlin A Laycox, Rory Thompson, Jasmine A Haggerty, Arnold J Wilkins, Sarah M Haigh","doi":"10.1177/03010066241252066","DOIUrl":"10.1177/03010066241252066","url":null,"abstract":"<p><p>Flicker and patterns of stripes in the modern environment can evoke visual illusions, discomfort migraine, and seizures. We measured reading speed while striped and less striped texts were illuminated with LED lights. In Experiment 1, the lights flickered at 60 Hz and 120 Hz compared to 60 kHz (perceived as steady light). In Experiment 2, the lights flickered at 60 Hz or 600 Hz (at which frequency the phantom array is most visible), and were compared to continuous light. Two types of text were used: one containing words with high horizontal autocorrelation (striped) and another containing words with low autocorrelation (less striped). We measured the number of illusions participants saw in the Pattern Glare (PG) Test. Overall, reading speed was slowest during the 60 Hz and 600 Hz flicker and was slower when reading the high autocorrelation text. Interestingly, the low PG group showed greater effects of flicker on reading speed than the high PG group, which tended to be slower overall. In addition, reading speed in the high PG group was reduced when the autocorrelation of the text was high. These findings suggest that uncomfortable visual environments reduce reading efficiency, the more so in individuals who are visually sensitive.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"512-528"},"PeriodicalIF":1.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140858773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inattentional aftereffects: The role of attention on the strength of the motion aftereffect.
Daphne Roumani, Konstantinos Moutoussis
Pub Date: 2024-08-01 | Epub Date: 2024-06-03 | DOI: 10.1177/03010066241252390 | Perception, pp. 544-562
How attention affects the processing of visual information is one of the most intriguing questions in the study of visual perception. One way to examine this interaction is by studying how perceptual aftereffects are modulated by attention. In the present study, we manipulated attention during adaptation to translational motion generated by coherently moving random dots, in order to investigate the effect of the distraction of attention on the strength of the peripheral dynamic motion aftereffect (MAE). A foveal rapid serial visual presentation (RSVP) task of varying difficulty was introduced during the adaptation period, while the adaptation and test stimuli were presented peripherally. Furthermore, to examine the interaction between the physical characteristics of the stimulus and attention, we manipulated the motion coherence level of the adaptation stimuli. Our results suggest that the removal of attention through an irrelevant task modulated the MAE's magnitude moderately and that this effect depends on stimulus strength. We also showed that the MAE still persists with subthreshold and unattended stimuli, suggesting that attention may not be required for the complete development of the MAE.
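Adaptation stimuli of this type are random-dot kinematograms in which a coherence parameter sets the fraction of dots moving in a common direction. Below is a minimal sketch of one frame-update rule for such a stimulus; the dot count, speed, wrap-around field, and 50% coherence value are illustrative assumptions, not the parameters used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

def rdk_step(positions, coherence, direction=np.array([1.0, 0.0]), speed=0.02):
    """Advance a random-dot kinematogram by one frame.

    A `coherence` fraction of dots moves in `direction`; the rest move in
    random directions. Positions live in the unit square and wrap around."""
    n = positions.shape[0]
    coherent = rng.random(n) < coherence
    angles = rng.uniform(0, 2 * np.pi, n)
    steps = speed * np.column_stack([np.cos(angles), np.sin(angles)])
    steps[coherent] = speed * direction
    return (positions + steps) % 1.0

dots = rng.random((200, 2))   # 200 dots in the unit square
for _ in range(60):           # simulate one second at 60 frames/s
    dots = rdk_step(dots, coherence=0.5)
```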
{"title":"Inattentional aftereffects: The role of attention on the strength of the motion aftereffect.","authors":"Daphne Roumani, Konstantinos Moutoussis","doi":"10.1177/03010066241252390","DOIUrl":"10.1177/03010066241252390","url":null,"abstract":"<p><p>The way that attention affects the processing of visual information is one of the most intriguing fields in the study of visual perception. One way to examine this interaction is by studying the way perceptual aftereffects are modulated by attention. In the present study, we have manipulated attention during adaptation to translational motion generated by coherently moving random dots, in order to investigate the effect of the distraction of attention on the strength of the peripheral dynamic motion aftereffect (MAE). A foveal rapid serial visual presentation task (RSVP) of varying difficulty was introduced during the adaptation period while the adaptation and test stimuli were presented peripherally. Furthermore, to examine the interaction between the physical characteristics of the stimulus and attention, we have manipulated the motion coherence level of the adaptation stimuli. Our results suggested that the removal of attention through an irrelevant task modulated the MAE's magnitude moderately and that such an effect depends on the stimulus strength. We also showed that the MAE still persists with subthreshold and unattended stimuli, suggesting that perhaps attention is not required for the complete development of the MAE.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"544-562"},"PeriodicalIF":1.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141201112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptual task drives later fixations and long latency saccades, while early fixations and short latency saccades are more automatic.
Anna Metzger, Robert John Ennis, Katja Doerschner, Matteo Toscani
Pub Date: 2024-08-01 | Epub Date: 2024-06-12 | DOI: 10.1177/03010066241253816 | Perception, pp. 501-511 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11318208/pdf/
We used a simple stimulus, dissociating perceptually relevant information in space, to differentiate between bottom-up and task-driven fixations. Six participants viewed a dynamic scene showing an elastic object, fixed to the ceiling, reacting to being hit. In one condition they had to judge the object's stiffness and in the other its lightness. The results show that initial fixations tend to land in the centre of an object, independent of the task. After the initial fixation, participants tended to look at task-diagnostic regions. This fixation behaviour correlates with high perceptual performance. Similarly, low-latency saccades lead to fixations that do not depend on the task, whereas higher-latency saccades do.
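One simple way to operationalise this pattern is to split fixations by the latency of the preceding saccade and compare where they land relative to the object's centre. The sketch below does this for a handful of hypothetical fixation records; the latency cut-off, coordinates, and distance measure are assumptions for illustration, not the analysis reported in the paper.

```python
import numpy as np

# Hypothetical fixation records: saccade latency (ms) and landing position (x, y)
# in object-centred coordinates, where (0, 0) is the object's centre.
latency_ms = np.array([110, 135, 150, 240, 310, 280, 420, 95, 370, 205])
landing_xy = np.array([[0.02, -0.01], [0.05, 0.03], [-0.03, 0.02], [0.30, 0.41],
                       [0.28, 0.45], [0.25, 0.38], [0.33, 0.47], [0.01, 0.00],
                       [0.29, 0.44], [0.27, 0.40]])

short = latency_ms < 180                         # arbitrary split for illustration
dist_to_centre = np.linalg.norm(landing_xy, axis=1)

print(f"short-latency fixations, mean distance to centre: {dist_to_centre[short].mean():.2f}")
print(f"long-latency fixations,  mean distance to centre: {dist_to_centre[~short].mean():.2f}")
```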
{"title":"Perceptual task drives later fixations and long latency saccades, while early fixations and short latency saccades are more automatic.","authors":"Anna Metzger, Robert John Ennis, Katja Doerschner, Matteo Toscani","doi":"10.1177/03010066241253816","DOIUrl":"10.1177/03010066241253816","url":null,"abstract":"<p><p>We used a simple stimulus, dissociating perceptually relevant information in space, to differentiate between bottom-up and task-driven fixations. Six participants viewed a dynamic scene showing the reaction of an elastic object fixed to the ceiling being hit. In one condition they had to judge the object's stiffness and in the other condition its lightness. The results show that initial fixations tend to land in the centre of an object, independent of the task. After the initial fixation, participants tended to look at task diagnostic regions. This fixation behaviour correlates with high perceptual performance. Similarly, low-latency saccades lead to fixations that do not depend on the task, whereas higher latency does.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":" ","pages":"501-511"},"PeriodicalIF":1.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11318208/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141307201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}