This discussion paper supplements our two theoretical contributions previously published in this journal on the geometric nature of visual space. We first show here how our Riemannian formulation explains the recent experimental finding (published in this special issue on size constancy) that, contrary to conclusions from past work, vergence does not affect perceived size. We then turn to afterimage experiments connected to that work. Beginning with the Taylor illusion, we explore how our proposed Riemannian visual-somatosensory-hippocampal association memory network accounts, in the following way, for the perceptions that occur when afterimages are viewed in conjunction with body movement. The Riemannian metric incorporated in the association memory network accurately emulates the warping of 3D visual space that is intrinsically introduced by the eye. The network thus accurately anticipates the change in size of the retinal image of an object with a change in the Euclidean distance between the egocentre and the object. An object will only be perceived to change in size when there is a difference between the actual size of its image on the retina and the anticipated size of that image provided by the network. This provides a central mechanism for size constancy. If the retinal image is the afterimage of a body part, typically a hand, and that hand moves relative to the egocentre, the afterimage remains constant but the proprioceptive signals change to give the new hand position. When the network gives the anticipated size of the hand at its new position, this no longer matches the fixed afterimage, and hence a size-change illusion occurs.
{"title":"The Riemannian Geometry Theory of Visually-Guided Movement Accounts for Afterimage Illusions and Size Constancy.","authors":"Peter D Neilson, Megan D Neilson, Robin T Bye","doi":"10.3390/vision6020037","DOIUrl":"https://doi.org/10.3390/vision6020037","url":null,"abstract":"<p><p>This discussion paper supplements our two theoretical contributions previously published in this journal on the geometric nature of visual space. We first show here how our Riemannian formulation explains the recent experimental finding (published in this special issue on size constancy) that, contrary to conclusions from past work, vergence does not affect perceived size. We then turn to afterimage experiments connected to that work. Beginning with the Taylor illusion, we explore how our proposed Riemannian visual-somatosensory-hippocampal association memory network accounts in the following way for perceptions that occur when afterimages are viewed in conjunction with body movement. The Riemannian metric incorporated in the association memory network accurately emulates the warping of 3D visual space that is intrinsically introduced by the eye. The network thus accurately anticipates the change in size of retinal images of objects with a change in Euclidean distance between the egocentre and the object. An object will only be perceived to change in size when there is a difference between the actual size of its image on the retina and the anticipated size of that image provided by the network. This provides a central mechanism for size constancy. If the retinal image is the afterimage of a body part, typically a hand, and that hand moves relative to the egocentre, the afterimage remains constant but the proprioceptive signals change to give the new hand position. When the network gives the anticipated size of the hand at its new position this no longer matches the fixed afterimage, hence a size-change illusion occurs.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9231332/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40269826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What is fundamental in vision has been debated for millennia. For philosophical realists and the physiological approach to vision, the objects of the outer world are truly given, and failures to perceive objects properly, such as in illusions, are just sporadic misperceptions. The goal is to replace the subjectivity of the mind with careful physiological analyses. Continental philosophy and the Gestaltists are rather skeptical about, or indifferent to, external objects. The percepts themselves are their starting point, because it is hard to deny the truth of one's own percepts. I will show that, whereas both approaches can explain many visual phenomena with classic visual stimuli, they both run into trouble when stimuli become only slightly more complex. I suggest that these failures have a deeper conceptual reason, namely that their foundations (objects, percepts) do not hold true. I propose that only physical states exist in a mind-independent manner and that everyday objects, such as bottles and trees, are perceived in a mind-dependent way. The fundamental units of object processing are extended windows of unconscious processing, followed by short, discrete conscious percepts.
{"title":"The Irreducibility of Vision: Gestalt, Crowding and the Fundamentals of Vision.","authors":"Michael H Herzog","doi":"10.3390/vision6020035","DOIUrl":"https://doi.org/10.3390/vision6020035","url":null,"abstract":"<p><p>What is fundamental in vision has been discussed for millennia. For philosophical realists and the physiological approach to vision, the objects of the outer world are truly given, and failures to perceive objects properly, such as in illusions, are just sporadic misperceptions. The goal is to replace the subjectivity of the mind by careful physiological analyses. Continental philosophy and the Gestaltists are rather skeptical or ignorant about external objects. The percepts themselves are their starting point, because it is hard to deny the truth of one own's percepts. I will show that, whereas both approaches can well explain many visual phenomena with classic visual stimuli, they both have trouble when stimuli become slightly more complex. I suggest that these failures have a deeper conceptual reason, namely that their foundations (objects, percepts) do not hold true. I propose that only physical states exist in a mind independent manner and that everyday objects, such as bottles and trees, are perceived in a mind-dependent way. The fundamental processing units to process objects are extended windows of unconscious processing, followed by short, discrete conscious percepts.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9228288/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40269957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A growing body of literature offers exciting perspectives on the use of brain stimulation to boost training-related perceptual improvements in humans. Recent studies suggest that combining visual perceptual learning (VPL) training with concomitant transcranial electric stimulation (tES) leads to learning-rate and generalization effects larger than those of each technique used individually. Both VPL and tES have been used to induce neural plasticity in brain regions involved in visual perception, leading to long-lasting improvements in visual function. Although both techniques are more than a century old, only recently have they been combined in the same paradigm to further improve visual performance in humans. Nonetheless, promising evidence in healthy participants and in clinical populations suggests that the best could be yet to come for the combined use of VPL and tES. In the first part of this perspective piece, we briefly discuss the history, characteristics, results, and possible mechanisms behind each technique and their combined effect. In the second part, we discuss relevant aspects concerning the use of these techniques and propose a perspective on the combined use of electric brain stimulation and perceptual learning in the visual system, closing with some open questions on the topic.
{"title":"Perspectives on the Combined Use of Electric Brain Stimulation and Perceptual Learning in Vision.","authors":"Marcello Maniglia","doi":"10.3390/vision6020033","DOIUrl":"https://doi.org/10.3390/vision6020033","url":null,"abstract":"<p><p>A growing body of literature offers exciting perspectives on the use of brain stimulation to boost training-related perceptual improvements in humans. Recent studies suggest that combining visual perceptual learning (VPL) training with concomitant transcranial electric stimulation (tES) leads to learning rate and generalization effects larger than each technique used individually. Both VPL and tES have been used to induce neural plasticity in brain regions involved in visual perception, leading to long-lasting visual function improvements. Despite being more than a century old, only recently have these techniques been combined in the same paradigm to further improve visual performance in humans. Nonetheless, promising evidence in healthy participants and in clinical population suggests that the best could still be yet to come for the combined use of VPL and tES. In the first part of this perspective piece, we briefly discuss the history, the characteristics, the results and the possible mechanisms behind each technique and their combined effect. In the second part, we discuss relevant aspects concerning the use of these techniques and propose a perspective concerning the combined use of electric brain stimulation and perceptual learning in the visual system, closing with some open questions on the topic.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9227313/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40269959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multisensory stimulation is associated with behavioural benefits, including faster processing speed, higher detection accuracy, and increased subjective awareness. These effects are most likely explained by multisensory integration, alertness, or a combination of the two. To examine changes in subjective awareness under multisensory stimulation, we conducted three experiments in which we used Continuous Flash Suppression to mask subthreshold visual targets for healthy observers. Using the Perceptual Awareness Scale, participants reported their level of awareness of the visual target on a trial-by-trial basis. The first experiment used an audio-visual Redundant Signal Effect paradigm, in which we found faster reaction times in the audio-visual condition than to auditory or visual signals alone. In the two subsequent experiments, we separated the auditory and visual signals, first spatially (Experiment 2) and then temporally (Experiment 3), to test whether the behavioural benefits in our multisensory stimulation paradigm were best explained by multisensory integration or by increased phasic alerting. Based on the findings, we conclude that the largest contributor to increased awareness of visual stimuli accompanied by auditory tones is a rise in phasic alertness and a reduction in temporal uncertainty, with a small but significant contribution of multisensory integration.
{"title":"Phasic Alertness and Multisensory Integration Contribute to Visual Awareness of Weak Visual Targets in Audio-Visual Stimulation under Continuous Flash Suppression.","authors":"Anna Matilda Helena Cederblad, Juho Äijälä, Søren Krogh Andersen, Mary Joan MacLeod, Arash Sahraie","doi":"10.3390/vision6020031","DOIUrl":"https://doi.org/10.3390/vision6020031","url":null,"abstract":"<p><p>Multisensory stimulation is associated with behavioural benefits, including faster processing speed, higher detection accuracy, and increased subjective awareness. These effects are most likely explained by multisensory integration, alertness, or a combination of the two. To examine changes in subjective awareness under multisensory stimulation, we conducted three experiments in which we used Continuous Flash Suppression to mask subthreshold visual targets for healthy observers. Using the Perceptual Awareness Scale, participants reported their level of awareness of the visual target on a trial-by-trial basis. The first experiment had an audio-visual Redundant Signal Effect paradigm, in which we found faster reaction times in the audio-visual condition compared to responses to auditory or visual signals alone. In two following experiments, we separated the auditory and visual signals, first spatially (experiment 2) and then temporally (experiment 3), to test whether the behavioural benefits in our multisensory stimulation paradigm could best be explained by multisensory integration or increased phasic alerting. Based on the findings, we conclude that the largest contributing factor to increased awareness of visual stimuli accompanied by auditory tones is a rise in phasic alertness and a reduction in temporal uncertainty with a small but significant contribution of multisensory integration.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9228768/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40269955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Glass patterns (GPs) are visual stimuli commonly employed to study form-motion interactions. There is brain-imaging evidence that the non-directional motion induced by dynamic GPs and the directional motion induced by random dot kinematograms (RDKs) both depend on the activity of the human motion complex (hMT+). However, whether dynamic GPs and RDKs rely on the same processing mechanisms is still a matter of dispute. The current study uses a visual perceptual learning (VPL) paradigm to address this question. Identical pre- and post-tests were given to two groups of participants, who had to discriminate random/noisy patterns from coherent form (dynamic GPs) and coherent motion (RDKs). Subsequently, one group was trained on dynamic translational GPs, whereas the other group was trained on RDKs. Generalization of learning to the non-trained stimulus would indicate that the same mechanisms are involved in the processing of both dynamic GPs and RDKs, whereas learning specificity would indicate that the two stimuli are likely processed by separate mechanisms, possibly within the same cortical network. The results showed that VPL was specific to the trained stimulus, suggesting that directional and non-directional motion may depend on different neural mechanisms.
{"title":"Mechanisms Underlying Directional Motion Processing and Form-Motion Integration Assessed with Visual Perceptual Learning.","authors":"Rita Donato, Andrea Pavan, Giovanni Cavallin, Lamberto Ballan, Luca Betteto, Massimo Nucci, Gianluca Campana","doi":"10.3390/vision6020029","DOIUrl":"https://doi.org/10.3390/vision6020029","url":null,"abstract":"<p><p>Dynamic Glass patterns (GPs) are visual stimuli commonly employed to study form-motion interactions. There is brain imaging evidence that non-directional motion induced by dynamic GPs and directional motion induced by random dot kinematograms (RDKs) depend on the activity of the human motion complex (hMT+). However, whether dynamic GPs and RDKs rely on the same processing mechanisms is still up for dispute. The current study uses a visual perceptual learning (VPL) paradigm to try to answer this question. Identical pre- and post-tests were given to two groups of participants, who had to discriminate random/noisy patterns from coherent form (dynamic GPs) and motion (RDKs). Subsequently, one group was trained on dynamic translational GPs, whereas the other group on RDKs. On the one hand, the generalization of learning to the non-trained stimulus would indicate that the same mechanisms are involved in the processing of both dynamic GPs and RDKs. On the other hand, learning specificity would indicate that the two stimuli are likely to be processed by separate mechanisms possibly in the same cortical network. The results showed that VPL is specific to the stimulus trained, suggesting that directional and non-directional motion may depend on different neural mechanisms.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9229663/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40269953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visuospatial working memory (WM) requires the activity of a distributed network, including right parietal regions, to sustain storage capacity, attentional deployment, and active manipulation of information. Notably, while the electrophysiological correlates of such regions have been explored using many different indices, evidence for a functional involvement of the individual frequency peaks in the alpha (IAF) and theta (ITF) bands is still scarce, despite their relevance in many influential theories of WM. Interestingly, there is also a parallel lack of literature on the effect of short-term practice on WM performance. Here, we aim to clarify whether the simple repetition of a change-detection task is beneficial to WM performance and to what degree any such effect can be predicted by IAF and ITF. For this purpose, 25 healthy participants performed a change-detection task at baseline and in a retest session, while IAF and ITF were also measured. Results show that task repetition improves WM performance. In addition, right parietal IAF, but not ITF, accounts for the performance gain, such that a faster IAF predicts a higher performance gain. Our findings align with recent literature suggesting that the faster the posterior alpha, the finer the perceptual sampling rate, and the higher the WM performance gain.
{"title":"Parietal Alpha Oscillatory Peak Frequency Mediates the Effect of Practice on Visuospatial Working Memory Performance.","authors":"Riccardo Bertaccini, Giulia Ellena, Joaquin Macedo-Pascual, Fabrizio Carusi, Jelena Trajkovic, Claudia Poch, Vincenzo Romei","doi":"10.3390/vision6020030","DOIUrl":"https://doi.org/10.3390/vision6020030","url":null,"abstract":"<p><p>Visuospatial working memory (WM) requires the activity of a spread network, including right parietal regions, to sustain storage capacity, attentional deployment, and active manipulation of information. Notably, while the electrophysiological correlates of such regions have been explored using many different indices, evidence for a functional involvement of the individual frequency peaks in the alpha (IAF) and theta bands (ITF) is still poor despite their relevance in many influential theories regarding WM. Interestingly, there is also a parallel lack of literature about the effect of short-term practice on WM performance. Here, we aim to clarify whether the simple repetition of a change-detection task might be beneficial to WM performance and to which degree these effects could be predicted by IAF and ITF. For this purpose, 25 healthy participants performed a change-detection task at baseline and in a retest session, while IAF and ITF were also measured. Results show that task repetition improves WM performance. In addition, right parietal IAF, but not ITF, accounts for performance gain such that faster IAF predicts higher performance gain. Our findings align with recent literature suggesting that the faster the posterior alpha, the finer the perceptual sampling rate, and the higher the WM performance gain.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9230002/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40269956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For two centuries, visual illusions have attracted the attention of neurobiologists and comparative psychologists, given the possibility of investigating the complexity of perceptual mechanisms by using relatively simple patterns. Animal models, such as primates, birds, and fish, have played a crucial role in understanding the physiological circuits involved in susceptibility to visual illusions. However, the comprehension of such mechanisms is still a matter of debate. Despite their very different neural architectures, some arthropods, primarily Hymenoptera and Diptera, have recently been shown to experience illusions similar to those experienced by humans, suggesting that perceptual mechanisms are evolutionarily conserved among species. Here, we review the current state of illusory perception in bees. First, we introduce the bee visual system and speculate about which areas might make bees susceptible to illusory scenes. Second, we review the current state of knowledge on misperception in bees (Apidae), focusing on the visual stimuli used in the literature. Finally, we discuss important aspects to be considered before claiming that a species shows a higher cognitive ability, while giving equal consideration to alternative hypotheses. This growing evidence provides insights into the evolutionary origin of visual mechanisms across species.
{"title":"Illusional Perspective across Humans and Bees.","authors":"Elia Gatto, Olli J Loukola, Maria Elena Miletto Petrazzini, Christian Agrillo, Simone Cutini","doi":"10.3390/vision6020028","DOIUrl":"10.3390/vision6020028","url":null,"abstract":"<p><p>For two centuries, visual illusions have attracted the attention of neurobiologists and comparative psychologists, given the possibility of investigating the complexity of perceptual mechanisms by using relatively simple patterns. Animal models, such as primates, birds, and fish, have played a crucial role in understanding the physiological circuits involved in the susceptibility of visual illusions. However, the comprehension of such mechanisms is still a matter of debate. Despite their different neural architectures, recent studies have shown that some arthropods, primarily Hymenoptera and Diptera, experience illusions similar to those humans do, suggesting that perceptual mechanisms are evolutionarily conserved among species. Here, we review the current state of illusory perception in bees. First, we introduce bees' visual system and speculate which areas might make them susceptible to illusory scenes. Second, we review the current state of knowledge on misperception in bees (Apidae), focusing on the visual stimuli used in the literature. Finally, we discuss important aspects to be considered before claiming that a species shows higher cognitive ability while equally supporting alternative hypotheses. This growing evidence provides insights into the evolutionary origin of visual mechanisms across species.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9231007/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40269954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent evidence suggesting that object detection is improved following valid rather than invalid labels implies that semantics influence object detection. It is not clear, however, whether the results index object detection or feature detection. Further, because control conditions were absent and labels and objects were repeated multiple times, the underlying mechanisms are unknown. We assessed object detection via figure assignment, whereby objects are segmented from their backgrounds. Masked bipartite displays depicting a portion of a mono-oriented object (a familiar configuration) on one side of a central border were shown once only, for 90 or 100 ms. Familiar configuration is a figural prior. Accurate detection was indexed by reports of an object on the familiar-configuration side of the border. Compared to control experiments without labels, valid labels improved accuracy and reduced response times (RTs) more for upright than for inverted objects (Studies 1 and 2). Invalid labels denoting different superordinate-level objects (DSC; Study 1) or same superordinate-level objects (SSC; Study 2) reduced accuracy for upright displays only. This orientation dependency indicates that the effects are mediated by activated object representations rather than by features, which are invariant over orientation. Following invalid SSC labels (Study 2), accurate-detection RTs were longer than control RTs for both orientations, implicating conflict between semantic representations that had to be resolved before object detection. These results demonstrate that object detection is not just affected by semantics; it entails semantics.
{"title":"Semantic Expectation Effects on Object Detection: Using Figure Assignment to Elucidate Mechanisms.","authors":"Rachel M Skocypec, Mary A Peterson","doi":"10.3390/vision6010019","DOIUrl":"https://doi.org/10.3390/vision6010019","url":null,"abstract":"<p><p>Recent evidence suggesting that object detection is improved following valid rather than invalid labels implies that semantics influence object detection. It is not clear, however, whether the results index object detection or feature detection. Further, because control conditions were absent and labels and objects were repeated multiple times, the mechanisms are unknown. We assessed object detection via figure assignment, whereby objects are segmented from backgrounds. Masked bipartite displays depicting a portion of a mono-oriented object (a familiar configuration) on one side of a central border were shown once only for 90 or 100 ms. Familiar configuration is a figural prior. Accurate detection was indexed by reports of an object on the familiar configuration side of the border. Compared to control experiments without labels, valid labels improved accuracy and reduced response times (RTs) more for upright than inverted objects (Studies 1 and 2). Invalid labels denoting different superordinate-level objects (DSC; Study 1) or same superordinate-level objects (SSC; Study 2) reduced accuracy for upright displays only. Orientation dependency indicates that effects are mediated by activated object representations rather than features which are invariant over orientation. Following invalid SSC labels (Study 2), accurate detection RTs were longer than control for both orientations, implicating conflict between semantic representations that had to be resolved before object detection. These results demonstrate that object detection is not just affected by semantics, it entails semantics.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8953613/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40318797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This review identified evidence characterizing pseudomyopia as the result of an increase in ocular refractive power due to overstimulation of the eye's accommodative mechanism. It should not be confused with the term "secondary myopia", which includes transient myopic shifts caused by lenticular refractive-index changes and myopia associated with systemic syndromes. The aim was to synthesize the qualitative evidence on pseudomyopia in terms that clarify its pathophysiology, clinical presentation, assessment and diagnosis, and treatment. A comprehensive literature search of PubMed and the Scopus database was carried out for articles published up to November 2021, without a date limit. The review was reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Following the inclusion and exclusion criteria, a total of 54 studies were included in the qualitative synthesis. The terms pseudomyopia and accommodation spasm were found in most of the studies reviewed. The review warns that, although there is agreement on the assessment and diagnosis of the condition, there is no consensus on its management, and the literature describes a range of treatments.
{"title":"Pseudomyopia: A Review.","authors":"María García-Montero, Gema Felipe-Márquez, Pedro Arriola-Villalobos, Nuria Garzón","doi":"10.3390/vision6010017","DOIUrl":"https://doi.org/10.3390/vision6010017","url":null,"abstract":"<p><p>This review has identified evidence about pseudomyopia as the result of an increase in ocular refractive power due to an overstimulation of the eye's accommodative mechanism. It cannot be confused with the term \"secondary myopia\", which includes transient myopic shifts caused by lenticular refractive index changes and myopia associated with systemic syndromes. The aim was to synthesize the literature on qualitative evidence about pseudomyopia in terms that clarify its pathophysiology, clinical presentation, assessment and diagnosis and treatment. A comprehensive literature search of PubMed and the Scopus database was carried out for articles published up to November 2021, without a data limit. This review was reported following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines. Following inclusion and exclusion criteria, a total of 54 studies were included in the qualitative synthesis. The terms pseudomyopia and accommodation spasm have been found in most of the studies reviewed. The review has warned that although there is agreement on the assessment and diagnosis of the condition, there is no consensus on its management, and the literature describes a range of treatment.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8950661/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40318796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human memory consists of sensory memory (SM), short-term memory (STM), and long-term memory (LTM). SM has a large capacity but decays rapidly; STM has a limited capacity but lasts longer. The traditional view of these memory systems resembles a leaky hourglass, with the large top and bottom portions representing the large capacities of SM and LTM, whereas the narrow portion in the middle represents the limited capacity of STM. The "leak" in the top part of the hourglass depicts the rapid decay of the contents of SM. Recently, however, it was shown that major bottlenecks for motion processing exist prior to STM, and the "leaky hourglass" model was replaced by a "leaky flask" model with a narrower top part to capture bottlenecks prior to STM. The leaky flask model was based on data from one study, and the first goal of the current paper was to test whether the leaky flask model would generalize to a different set of data. The second goal of the paper was to explore various block-diagram models for memory systems and determine the one best supported by the data. We expressed these block-diagram models as statistical mixture models and, using the Bayesian information criterion (BIC), found that a model with four components, viz., SM, attention, STM, and guessing, provided the best fit to our data. In summary, we generalized previous findings about early qualitative and quantitative bottlenecks, as expressed in the leaky flask model, and showed that a four-process model can provide a good explanation of how visual information is processed and stored in memory.
{"title":"Capacity and Allocation across Sensory and Short-Term Memories.","authors":"Shaoying Wang, Srimant P Tripathy, Haluk Öğmen","doi":"10.3390/vision6010015","DOIUrl":"https://doi.org/10.3390/vision6010015","url":null,"abstract":"<p><p>Human memory consists of sensory memory (SM), short-term memory (STM), and long-term memory (LTM). SM enables a large capacity, but decays rapidly. STM has limited capacity, but lasts longer. The traditional view of these memory systems resembles a leaky hourglass, the large top and bottom portions representing the large capacities of SM and LTM, whereas the narrow portion in the middle represents the limited capacity of STM. The \"leak\" in the top part of the hourglass depicts the rapid decay of the contents of SM. However, recently, it was shown that major bottlenecks for motion processing exist prior to STM, and the \"leaky hourglass\" model was replaced by a \"leaky flask\" model with a narrower top part to capture bottlenecks prior to STM. The leaky flask model was based on data from one study, and the first goal of the current paper was to test if the leaky flask model would generalize by using a different set of data. The second goal of the paper was to explore various block diagram models for memory systems and determine the one best supported by the data. We expressed these block diagram models in terms of statistical mixture models and, by using the Bayesian information criterion (BIC), found that a model with four components, viz., SM, attention, STM, and guessing, provided the best fit to our data. In summary, we generalized previous findings about early qualitative and quantitative bottlenecks, as expressed in the leaky flask model and showed that a four-process model can provide a good explanation for how visual information is processed and stored in memory.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8955927/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40318795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}