Sensorimotor confidence for tracking eye movements.
Alexander Goettker, Shannon M Locke, Karl R Gegenfurtner, Pascal Mamassian
Journal of Vision, 2024-08-01. doi:10.1167/jov.24.8.12. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11363210/pdf/

For successful interactions with the world, we often have to evaluate our own performance. Although eye movements are one of the most frequent actions we perform, we are typically unaware of them. Here, we investigated whether there is any evidence for metacognitive sensitivity for the accuracy of eye movements. Participants tracked a dot cloud as it followed an unpredictable sinusoidal trajectory and then reported if they thought their performance was better or worse than their average tracking performance. Our results show above-chance identification of better tracking behavior across all trials and also for repeated attempts of the same target trajectories. Sensitivity in discriminating performance between better and worse trials was stable across sessions, but judgements within a trial relied more on performance in the final seconds. This behavior matched previous reports when judging the quality of hand movements, although overall metacognitive sensitivity for eye movements was significantly lower.
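One way to make the "better/worse than average" comparison concrete is to score how often the report agrees with whether tracking error actually fell below the observer's own median error. This is a minimal hypothetical sketch (the numbers and the agreement index are illustrative, not the paper's actual measure or data):

```python
import statistics

def metacognitive_agreement(errors, said_better):
    """Proportion of trials where the 'better than average' report matches
    whether tracking error was below the observer's median error."""
    med = statistics.median(errors)
    agree = sum((e < med) == b for e, b in zip(errors, said_better))
    return agree / len(errors)

# Hypothetical per-trial tracking errors and confidence reports
errors = [0.8, 1.2, 0.5, 1.6, 0.9, 1.4]
reports = [True, False, True, False, True, True]  # "better than average"?
print(metacognitive_agreement(errors, reports))   # above 0.5 = above chance
```

An agreement rate reliably above 0.5 across trials would indicate above-chance metacognitive sensitivity, in the spirit of the result reported here.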
Feature binding is slow: Temporal integration explains apparent ultrafast binding.
Lucija Blaževski, Timo Stein, H Steven Scholte
Journal of Vision, 2024-08-01. doi:10.1167/jov.24.8.3. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11309034/pdf/

Visual perception involves binding of distinct features into a unified percept. Although traditional theories link feature binding to time-consuming recurrent processes, Holcombe and Cavanagh (2001) demonstrated ultrafast, early binding of features that belong to the same object. The task required binding of orientation and luminance within an exceptionally short presentation time. However, because visual stimuli were presented over multiple presentation cycles, their findings can alternatively be explained by temporal integration over the extended stimulus sequence. Here, we conducted three experiments manipulating the number of presentation cycles. If early binding occurs, one extremely short cycle should be sufficient for feature integration. Conversely, late binding theories predict that successful binding requires substantial time and improves with additional presentation cycles. Our findings indicate that task-relevant binding of features from the same object occurs slowly, supporting late binding theories.
Prospective control of steering through multiple waypoints.
A J Jansen, Brett R Fajen
Journal of Vision, 2024-08-01. doi:10.1167/jov.24.8.1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11305437/pdf/

Some locomotor tasks involve steering at high speeds through multiple waypoints within cluttered environments. Although in principle actors could treat each individual waypoint in isolation, skillful performance would seem to require them to adapt their trajectory to the most immediate waypoint in anticipation of subsequent waypoints. To date, there have been few studies of such behavior, and the evidence that does exist is inconclusive about whether steering is affected by multiple future waypoints. The present study was designed to address the need for a clearer understanding of how humans adapt their steering movements in anticipation of future goals. Subjects performed a simulated drone flying task in a forest-like virtual environment that was presented on a monitor while their eye movements were tracked. They were instructed to steer through a series of gates while the distance at which gates first became visible (i.e., lookahead distance) was manipulated between trials. When gates became visible at least 1-1/2 segments in advance, subjects successfully flew through a high percentage of gates, rarely collided with obstacles, and maintained a consistent speed. They also approached the most immediate gate in a way that depended on the angular position of the subsequent gate. However, when the lookahead distance was less than 1-1/2 segments, subjects followed longer paths and flew at slower, more variable speeds. The findings demonstrate that the control of steering through multiple waypoints does indeed depend on information from beyond the most immediate waypoint. Discussion focuses on the possible control strategies for steering through multiple waypoints.
The visual statistical learning overcomes scene dissimilarity through an independent clustering process.
Xiaoyu Chen, Jie Wang, Qiang Liu
Journal of Vision, 2024-08-01. doi:10.1167/jov.24.8.5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11314707/pdf/

Contextual cueing is a phenomenon of visual statistical learning observed in visual search tasks. Previous research has found that the degree of deviation of items from their centroid, known as variability, determines the extent of generalization for that repeated scene. Introducing variability substantially increases dissimilarity between multiple occurrences of the same repeated layout. However, current theories do not explain the mechanisms that help to overcome this dissimilarity during contextual cue learning. We propose that the cognitive system initially abstracts specific scenes into scene layouts through an automatic clustering process unrelated to specific repeated scenes, and subsequently uses these abstracted scene layouts for contextual cue learning. Experiment 1 indicates that introducing greater variability in search scenes hinders contextual cue learning. Experiment 2 further establishes that conducting extensive visual searches involving spatial variability in entirely novel scenes facilitates subsequent contextual cue learning involving corresponding scene variability, confirming that learning of clustering knowledge precedes contextual cue learning and is independent of specific repeated scenes. Overall, this study demonstrates the existence of multiple levels of learning in visual statistical learning, where item-level learning can serve as material for layout-level learning, and the generalization reflects the constraining role of item-level knowledge on layout-level knowledge.
Flicker adaptation improves acuity for briefly presented stimuli by reducing crowding.
Selassie Tagoh, Lisa M Hamm, Dietrich S Schwarzkopf, Steven C Dakin
Journal of Vision, 2024-08-01. doi:10.1167/jov.24.8.15. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11364176/pdf/

Adaptation to flickering/dynamic noise improves visual acuity for briefly presented stimuli (Arnold et al., 2016). Here, we investigate whether such adaptation operates directly on our ability to see detail or by changing fixational eye movements and pupil size or by reducing visual crowding. Following earlier work, visual acuity was measured in observers who were either unadapted or who had adapted to a 60-Hz flickering noise pattern. Participants reported the orientation of a white tumbling-T target (four-alternative forced choice [4AFC], ⊤⊣⊥⊢). The target was presented for 110 ms either in isolation or flanked by randomly oriented T's (e.g., ⊣⊤⊢) followed by an isolated (+) or flanked (+++) mask, respectively. We measured fixation stability (using an infrared eye tracker) while observers performed the task (with and without adaptation). Visual acuity improved modestly (around 8.4%) for flanked optotypes following adaptation to flicker (mean, -0.038 ± 0.063 logMAR; p = 0.015; BF10 = 3.66) but did not when measured with isolated letters (mean, -0.008 ± 0.055 logMAR; p = 0.5; BF10 = 0.29). The magnitude of acuity improvement was associated with individuals' (unadapted) susceptibility to crowding (the ratio of crowded to uncrowded acuity; r = -0.58, p = 0.008, BF10 = 7.70) but to neither fixation stability nor pupil size. Confirming previous reports, flicker improved acuity for briefly presented stimuli, but we show that this was only the case for crowded letters. These improvements likely arise from attenuation of sensitivity to a transient low spatial frequency (SF) image structure (Arnold et al., 2016; Tagoh et al., 2022), which may, for example, reduce masking of high SFs by low SFs. We also suggest that this attenuation could reduce backward masking and so reduce foveal crowding.
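The "around 8.4%" figure follows from the logMAR change: logMAR is the base-10 logarithm of the minimum angle of resolution, so a change of -0.038 logMAR corresponds to a 10^-0.038 ≈ 0.916 ratio in resolvable angle, i.e. roughly an 8.4% reduction. A minimal check of that conversion:

```python
def logmar_change_to_percent(delta_logmar: float) -> float:
    """Convert a change in logMAR acuity to a percent change in the
    minimum angle of resolution (negative logMAR change = improvement)."""
    return (1.0 - 10.0 ** delta_logmar) * 100.0

# Flanked condition: mean -0.038 logMAR after flicker adaptation
print(round(logmar_change_to_percent(-0.038), 1))  # -> 8.4
```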
Perception of #TheDress in childhood is influenced by age and green-leaf preference.
Guillermo Salcedo-Villanueva, Catalina Becerra-Revollo, Luis Antonio Rhoads-Avila, Julian García-Sánchez, Flor Angélica Jácome-Gutierrez, Linda Cernichiaro-Espinosa, Andrée Henaine-Berra, Axel Orozco-Hernandez, Humberto Ruiz-García, Eduardo Torres-Porras
Journal of Vision, 2024-08-01. doi:10.1167/jov.24.8.11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11353488/pdf/

The perception of the ambiguous image of #TheDress may be influenced by optical factors, such as macular pigments. Their accumulation during childhood could increase with age and the ingestion of carotenoid-containing foods. The purpose of this study was to investigate whether the visual perception of the dress in children would differ based on age and carotenoid preference. This was a cross-sectional, observational, and comparative study. A poll was administered to children aged 2 to 10 years. Parents were instructed to inquire about the color of #TheDress from their children. A carotenoid preference survey was also completed. A total of 413 poll responses were analyzed. Responses were categorized based on the perceived color of the dress: blue/black (BB) (n = 204) and white/gold (WG) (n = 209). The mean and median age of the WG group were higher than those of the BB group (mean 6.1, median 6.0 years, standard deviation [SD] 2.2; mean 5.5, median 5.0 years, SD 2.3; p = 0.007). Spearman correlation between age and group was 0.133 (p = 0.007). Green-leaf preference (GLP) showed a statistically significant difference between groups (Mann-Whitney U: p = 0.038). Spearman correlation between GLP and group was 0.102 (p = 0.037). Logistic regression for the perception of the dress as WG indicated that age and GLP were significant predictors (age: B weight 0.109, p = 0.012, odds ratio: 1.115; GLP: B weight 0.317, p = 0.033, odds ratio: 1.373). Older children and those with a higher GLP were more likely to perceive #TheDress as WG. These results suggest a potential relationship with the gradual accumulation of macular pigments throughout a child's lifetime.
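The reported odds ratios are internally consistent with the B weights: in logistic regression, the odds ratio for a predictor is the exponential of its coefficient. A quick check against the values above:

```python
import math

# Odds ratio = exp(B) for each logistic-regression coefficient.
for name, b in [("age", 0.109), ("green-leaf preference", 0.317)]:
    print(name, round(math.exp(b), 3))
# age 1.115, green-leaf preference 1.373 -- matching the reported odds ratios
```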
Corrections to: Exploring the extent to which shared mechanisms contribute to motion-position illusions.
Journal of Vision, 2024-08-01. doi:10.1167/jov.24.8.9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11346168/pdf/
Active mutual conjoint estimation of multiple contrast sensitivity functions.
Dom C P Marticorena, Quinn Wai Wong, Jake Browning, Ken Wilbur, Pinakin Gunvant Davey, Aaron R Seitz, Jacob R Gardner, Dennis L Barbour
Journal of Vision, 2024-08-01. doi:10.1167/jov.24.8.6. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11314691/pdf/

Recent advances in nonparametric contrast sensitivity function (CSF) estimation have yielded a new tradeoff between accuracy and efficiency not available to classical parametric estimators. An additional advantage of this new framework is the ability to independently tune multiple aspects of the estimator to seek further improvements. Machine learning CSF estimation with Gaussian processes allows for design optimization in the kernel, acquisition function, and underlying task representation, to name a few. This article describes a novel kernel for CSF estimation that is more flexible than a kernel based on strictly functional forms. Despite being more flexible, it can result in a more efficient estimator. Further, trial selection for data acquisition that is generalized beyond pure information gain can also improve estimator quality. Finally, introducing latent variable representations underlying general CSF shapes can enable simultaneous estimation of multiple CSFs, such as from different eyes, eccentricities, or luminances. The conditions under which the new procedures perform better than previous nonparametric estimation procedures are presented and quantified.
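To illustrate the general idea of nonparametric CSF estimation with a Gaussian process (this is a generic textbook sketch, not the paper's kernel, task representation, or acquisition rule): log sensitivity is modeled as a smooth unknown function of log spatial frequency, and the GP posterior mean smooths noisy threshold estimates without assuming a parametric CSF shape.

```python
import numpy as np

def rbf(a, b, length_scale=0.5):
    """Squared-exponential (RBF) covariance between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

# Toy ground truth: log sensitivity vs. log spatial frequency (inverted parabola)
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.5, 25)            # log10 spatial frequency grid
f_true = 2.0 - 1.5 * (x - 0.3) ** 2       # hypothetical CSF shape
y = f_true + rng.normal(0.0, 0.1, x.size) # noisy per-frequency threshold estimates

# GP posterior mean: fold observation noise into the kernel matrix and solve
K = rbf(x, x) + 0.1 ** 2 * np.eye(x.size)
mean = rbf(x, x) @ np.linalg.solve(K, y)

print(float(np.abs(mean - f_true).mean()))  # small average error vs. ground truth
```

A real estimator of this kind would additionally use the posterior variance to choose the next trial (the acquisition function the abstract refers to); here only the smoothing step is shown.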
Determining the efficacy of visual inspections at detecting non-biosecurity-compliant goods.
Kambiz Esfandi, Saeedeh Afsar, Kate Richards, Duncan Hedderley, Samuel D J Brown, Adriana Najar-Rodriguez, Mike Ormsby
Journal of Vision, 2024-08-01. doi:10.1167/jov.24.8.8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11343003/pdf/

Examination of imported commodities by trained inspectors searching for pest organisms is a common practice that phytosanitary regulatory agencies use to mitigate biosecurity risks along trade pathways. To investigate the effects of target size and color on the efficacy of these visual assessments, we affixed square decals to polystyrene models of mandarins. Sample units of 100 model fruit containing up to 10 marked models were examined by inspectors. Six sizes in six shades of brown were tested across two prevalence levels. The experiment consisted of five inspection rounds where 11 inspectors examined 77 sample units within an allocated time. The probability that decals were detected increased with mark size and color contrast. Smaller, low-contrast marks were mostly missed. The prevalence rate did not affect detectability. Over the course of the experiment, the false-positive rate dropped from 6% to 3%, whereas false-negative rates remained constant throughout. Large, dark targets were readily found with a mean recall of >90%, whereas small, pale marks had a mean recall of 9%. Increased experience made inspectors more competent at recognizing decals, reducing the false-positive rate. However, constant false-negative rates indicate that experience did not prevent inspectors from overlooking targets they could not perceive.
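The two error measures in this study are worth keeping distinct: recall is the fraction of marked models an inspector actually found, while the false-positive rate is the fraction of clean models incorrectly flagged. A tiny worked example with hypothetical counts (not the study's data):

```python
def recall(hits: int, targets: int) -> float:
    """Fraction of marked items that were detected (1 - false-negative rate)."""
    return hits / targets

def false_positive_rate(false_alarms: int, clean_items: int) -> float:
    """Fraction of unmarked items that were incorrectly flagged."""
    return false_alarms / clean_items

# Hypothetical round: 9 of 10 marked models found, 3 of 90 clean models flagged
print(recall(9, 10))                  # -> 0.9
print(false_positive_rate(3, 90))     # ~0.033
```

Experience improving the false-positive rate while recall for small pale marks stays flat is exactly the pattern these two independent quantities allow.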
Direction-selective adaptation from implied motion in infancy.
Riku Umekawa, So Kanazawa, Masami K Yamaguchi
Journal of Vision, 2024-08-01. doi:10.1167/jov.24.8.7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11343005/pdf/

We investigated whether adaptation from implied motion (IM) is transferred to real motion using optokinetic nystagmus (OKN) in infants. Specifically, we examined whether viewing a series of images depicting motion shifted infants' OKN responses to the opposite direction of random dot kinematograms (RDKs). Each RDK was presented 10 times in a pre-test, followed by 10 trials of IM adaptation and test. During the pre-test, the signal dots of the RDK moved left or right. During IM adaptation, 10 randomly selected images depicting leftward (or rightward) IM were presented. In the test, the RDK was presented immediately after the last IM image. An observer, blinded to the motion direction, assessed the OKN direction. The number of matches in OKN responses for each RDK direction was calculated as the match ratio of OKN. We conducted a two-way mixed analysis of variance, with age group (5-6 months and 7-8 months) as the between-participant factor and adaptation (pre-test and test) as the within-participant factor. Only in the 7- to 8-month-olds were OKN responses shifted in the direction opposite the RDK after viewing a series of images depicting motion, and these infants could detect both IM and RDK motion directions in the pre-test. Our results indicate that detecting the IM and RDK directions might induce direction-selective adaptation in 7- to 8-month-old infants.
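The "match ratio of OKN" can be expressed directly: the proportion of trials on which the judged OKN direction agrees with the RDK signal-dot direction. A minimal sketch with hypothetical trial labels (the scoring rule is as described in the abstract; the data are invented):

```python
def okn_match_ratio(okn_dirs, rdk_dirs):
    """Proportion of trials where the judged OKN direction matches the
    RDK signal-dot direction ('L' or 'R')."""
    matches = sum(o == r for o, r in zip(okn_dirs, rdk_dirs))
    return matches / len(rdk_dirs)

# Hypothetical 5-trial block: observer's OKN judgments vs. actual RDK directions
print(okn_match_ratio(["L", "L", "R", "R", "L"],
                      ["L", "R", "R", "R", "L"]))  # -> 0.8
```

A drop in this ratio from pre-test to test, after adapting to implied motion in one direction, is the signature of direction-selective adaptation the study looked for.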