How Does Spatial Attention Influence the Probability and Fidelity of Colour Perception?
Austin J Hurst, Michael A Lawrence, Raymond M Klein
Vision (Switzerland) 3(2), published 2019-06-17. doi:10.3390/vision3020031

Existing research has found that spatial attention alters how various stimulus properties are perceived (e.g., luminance, saturation), but few studies have explored whether it improves the accuracy of perception. To address this question, we performed two experiments using modified Posner cueing tasks, wherein participants made speeded detection responses to peripheral colour targets and then indicated their perceived colours on a colour wheel. In E1, cues were central and endogenous (i.e., they prompted voluntary attention) and the interval between cues and targets (stimulus onset asynchrony, or SOA) was always 800 ms. In E2, cues were peripheral and exogenous (i.e., they captured attention involuntarily) and the SOA varied between short (100 ms) and long (800 ms). A Bayesian mixed-model analysis was used to isolate the effects of attention on the probability and the fidelity of colour encoding. Both endogenous and short-SOA exogenous spatial cueing improved the probability of encoding the colour of targets. Improved fidelity of encoding was observed in the endogenous but not in the exogenous cueing paradigm. With exogenous cues, inhibition of return (IOR) was observed in both RT and probability at the long SOA. Overall, our findings reinforce the utility of continuous response variables in the research of attention.
Contextually-Based Social Attention Diverges across Covert and Overt Measures
Effie J Pereira, Elina Birmingham, Jelena Ristic
Vision (Switzerland) 3(2), published 2019-06-10. doi:10.3390/vision3020029

Humans spontaneously attend to social cues like faces and eyes. However, recent data show that this behavior is significantly weakened when visual content, such as luminance and the configuration of internal features, and visual context, such as background and facial expression, are controlled. Here, we investigated attentional biasing elicited in response to information presented within appropriate background contexts. Using a dot-probe task, participants were presented with a face-house cue pair, with a person sitting in a room and a house positioned within a picture hanging on a wall. A response target occurred at the previous location of the eyes, mouth, top of the house, or bottom of the house. Experiment 1 measured covert attention by assessing manual responses while participants maintained central fixation. Experiment 2 measured overt attention by assessing eye movements using an eye tracker. The data from both experiments indicated no evidence of spontaneous attentional biasing towards faces or facial features in manual responses; however, an infrequent, though reliable, overt bias towards the eyes of faces emerged. Together, these findings suggest that contextually-based social information does not determine spontaneous social attentional biasing in manual measures, although it may act to facilitate oculomotor behavior.
Recent Advances of Computerized Graphical Methods for the Detection and Progress Assessment of Visual Distortion Caused by Macular Disorders
Navid Mohaghegh, Ebrahim Ghafar-Zadeh, Sebastian Magierowski
Vision (Switzerland) 3(2), published 2019-06-05. doi:10.3390/vision3020025

Recent advances in computerized graphical methods have received significant attention for the detection and home monitoring of various visual distortions caused by macular disorders such as macular edema, central serous chorioretinopathy, and age-related macular degeneration. After a brief review of macular disorders and their conventional diagnostic methods, this paper reviews such graphical interface methods, including the computerized Amsler Grid, the Preferential Hyperacuity Perimeter, and the Three-dimensional Computer-automated Threshold Amsler Grid. Thereafter, the challenges these computerized methods face in the accurate and rapid detection of macular disorders are discussed. Early detection and progress assessment can significantly enhance the clinical procedures required for the diagnosis and treatment of macular disorders.
The Changing Role of Phonology in Reading Development
Sara V Milledge, Hazel I Blythe
Vision (Switzerland) 3(2), published 2019-05-30. doi:10.3390/vision3020023

Processing of both a word's orthography (its printed form) and phonology (its associated speech sounds) is critical for lexical identification during reading, in both beginning and skilled readers. Theories of learning to read typically posit a developmental change, from early readers' reliance on phonology to more skilled readers' development of direct orthographic-semantic links. Specifically, in becoming a skilled reader, the extent to which an individual processes phonology during lexical identification is thought to decrease. Recent data from eye movement research suggest, however, that the developmental change in phonological processing is somewhat more nuanced than this. Such studies show that phonology influences lexical identification in beginning and skilled readers in both typically and atypically developing populations. These data indicate, therefore, that the developmental change might better be characterised as a transition from overt decoding to abstract, covert recoding. We do not stop processing phonology as we become more skilled at reading; rather, the nature of that processing changes.
What Can Eye Movements Tell Us about Subtle Cognitive Processing Differences in Autism?
Philippa L Howard, Li Zhang, Valerie Benson
Vision (Switzerland) 3(2), published 2019-05-24. doi:10.3390/vision3020022

Autism spectrum disorder (ASD) is a neurodevelopmental condition principally characterised by impairments in social interaction and communication, and by repetitive behaviours and interests. This article reviews eye movement studies designed to investigate the underlying sampling or processing differences that might account for the principal characteristics of autism. Following a brief summary of a previous review chapter by one of the authors of the current paper, we present a detailed review of eye movement studies investigating various aspects of processing in autism over the last decade. The literature is organised into sections covering different cognitive components, including language and social communication and interaction studies. The aim of the review is to show how eye movement studies provide a very useful on-line processing measure, allowing us to account for observed differences in behavioural data (accuracy and reaction times). The subtle processing differences that eye movement data reveal in both language and social processing have the potential to impact the everyday communication domain in autism.
Eye Movements Actively Reinstate Spatiotemporal Mnemonic Content
Jordana S Wynn, Kelly Shen, Jennifer D Ryan
Vision (Switzerland) 3(2), published 2019-05-18. doi:10.3390/vision3020021

Eye movements support memory encoding by binding distinct elements of the visual world into coherent representations. However, the role of eye movements in memory retrieval is less clear. We propose that eye movements play a functional role in retrieval by reinstating the encoding context. By overtly shifting attention in a manner that broadly recapitulates the spatial locations and temporal order of encoded content, eye movements facilitate access to, and reactivation of, associated details. Such mnemonic gaze reinstatement may be obligatorily recruited when task demands exceed cognitive resources, as is often observed in older adults. We review research linking gaze reinstatement to retrieval, describe the neural integration between the oculomotor and memory systems, and discuss implications for models of oculomotor control, memory, and aging.
Meaning and Attentional Guidance in Scenes: A Review of the Meaning Map Approach
John M Henderson, Taylor R Hayes, Candace E Peacock, Gwendolyn Rehrig
Vision (Switzerland) 3(2), published 2019-05-10. doi:10.3390/vision3020019

Perception of a complex visual scene requires that important regions be prioritized and attentionally selected for processing. What is the basis for this selection? Although much research has focused on image salience as an important factor guiding attention, relatively little work has focused on semantic salience. To address this imbalance, we have recently developed a new method for measuring, representing, and evaluating the role of meaning in scenes. In this method, the spatial distribution of semantic features in a scene is represented as a meaning map. Meaning maps are generated from crowd-sourced responses given by naïve subjects who rate the meaningfulness of a large number of scene patches drawn from each scene. Meaning maps are coded in the same format as traditional image saliency maps, and therefore both types of maps can be directly evaluated against each other and against maps of the spatial distribution of attention derived from viewers' eye fixations. In this review we describe our work comparing the influences of meaning and image salience on attentional guidance in real-world scenes across a variety of viewing tasks, including memorization, aesthetic judgment, scene description, and saliency search and judgment. Overall, we have found that both meaning and salience predict the spatial distribution of attention in a scene, but that when the correlation between meaning and salience is statistically controlled, only meaning uniquely accounts for variance in attention.
Attention Combines Similarly in Covert and Overt Conditions
Christopher D Blair, Jelena Ristic
Vision (Switzerland) 3(2), published 2019-04-25. doi:10.3390/vision3020016

Attention is classically classified according to its mode of engagement into voluntary and reflexive, and its type of operation into covert and overt. The first distinction captures whether attention is elicited intentionally or by unexpected events; the second, whether attention is directed with or without eye movements. Recently, this taxonomy has been expanded to include automated orienting, engaged by overlearned symbols, and combined attention, engaged by a combination of several modes of function. However, so far, combined effects have been demonstrated in covert conditions only, and thus here we examined whether attentional modes combine in overt responses as well. To do so, we elicited automated, voluntary, and combined orienting in covert cases, i.e., when participants responded manually and maintained central fixation, and overt cases, i.e., when they responded by looking. The data indicated typical effects for automated and voluntary conditions in both covert and overt data, with the magnitude of the combined effect exceeding both the magnitude of each mode alone and their additive sum. No differences in the combined effects emerged across covert and overt conditions. As such, these results show that attentional systems combine similarly in covert and overt responses and highlight attention's dynamic flexibility in facilitating human behavior.
Dynamic Cancellation of Perceived Rotation from the Venetian Blind Effect
Joshua J Dobias, Wm Wren Stine
Vision (Switzerland) 3(2), published 2019-04-03. doi:10.3390/vision3020014

Geometric differences between the images seen by each eye enable the perception of depth. Depth is also produced in the absence of geometric disparities by binocular disparities in either average luminance or contrast, which is known as the Venetian blind effect. The temporal dynamics of the Venetian blind effect are much slower (1.3 Hz) than those for geometric binocular disparities (4-5 Hz). Sine-wave modulations of luminance and contrast disparity, however, can be discriminated from square-wave modulations at 1 Hz, which suggests a non-linearity. To measure this non-linearity, a luminance or contrast disparity modulation was presented at a particular frequency and paired with a geometric disparity modulation that cancelled the perceived rotation induced by the luminance or contrast modulation. Phase offsets between the luminance or contrast modulation and the geometric modulation varied in 50 ms increments from -200 to 200 ms. When the phases were aligned, observers perceived little or no rotation. When they were not aligned, a perceived rotation was induced by the contrast or luminance disparity and then cancelled by the geometric disparity, causing the perception of a slight jump. The Generalized Difference Model, which is linear in time, predicted a minimal probability when the luminance or contrast disparities occurred before the geometric disparities, owing to the slower dynamics of the Venetian blind effect. The Gated Generalized Difference Model, which is non-linear in time, predicted a minimal probability for offsets of 0 ms. Results followed the Gated model, which further suggests a non-linearity in time for the Venetian blind effect.
A Review of Motion and Orientation Processing in Migraine
Alex J Shepherd
Vision (Switzerland) 3(2), published 2019-03-27. doi:10.3390/vision3020012

Visual tests can be used as noninvasive tools to test models of the pathophysiology underlying neurological conditions such as migraine. They may also be used to track changes in performance that vary with the migraine cycle, or to track the efficacy of prophylactic treatments. This article reviews the literature on performance differences in two visual tasks, global motion discrimination and orientation processing, which, of the many visual tasks that have been used to compare migraine and control groups, have yielded the most consistent patterns of group differences. The implications for understanding the underlying pathophysiology of migraine are discussed, but the main focus is on bringing together disparate areas of research and suggesting those that can reveal practical uses of visual tests in the treatment and management of migraine.