Pub Date: 2026-02-07 | DOI: 10.1186/s41235-025-00690-x
Zeynep G Özkan, Ana Baciero, Manuel Perea, Pablo Gómez
Braille is a tactile writing system that enables individuals to read through the sense of touch. Although letter recognition research in the visual modality has informed reading instruction debates, the processes underlying braille letter recognition have received comparatively little attention, which has limited input from researchers to educators. In this study, we first quantified the formal properties of braille dots using measures of cue validity and entropy-based informativeness, and we tested whether the 26 letters of the braille alphabet were linearly separable in the six-dimensional binary space defined by dot presence. We then examined letter discriminability in fluent Spanish braille readers using a same-different task that included all possible letter combinations. From participants' accuracy and response time data, we constructed perceptual similarity matrices and applied hierarchical clustering to characterize the structure of braille letter similarity. The resulting clusters revealed a structured perceptual space that reflected both local dot features and global configurations. These results provide a characterization of the perceptual structure of the braille alphabet and show constraints on tactile letter recognition that extend beyond dot overlap, offering a benchmark to guide experimental control, instructional sequencing of letters, and computational models of tactile letter recognition.
Perceptual similarity and clustering in braille letter recognition. Cognitive Research: Principles and Implications, 11(1), 12. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12882933/pdf/
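The formal analysis this abstract describes — each letter as a point in a six-dimensional binary space of dot presence, with dots scored by an entropy-based informativeness measure — can be sketched as follows. This is a minimal illustration, not the authors' code: the dot patterns are standard braille, but the exact informativeness and cue-validity formulas used in the paper are not given here, so the binary-entropy measure below is one plausible formalization.

```python
from math import log2

# Standard braille dot patterns for a-z; dots are numbered 1-6
# (1-3 down the left column, 4-6 down the right column).
PATTERNS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4},
    "j": {2, 4, 5}, "k": {1, 3}, "l": {1, 2, 3}, "m": {1, 3, 4},
    "n": {1, 3, 4, 5}, "o": {1, 3, 5}, "p": {1, 2, 3, 4},
    "q": {1, 2, 3, 4, 5}, "r": {1, 2, 3, 5}, "s": {2, 3, 4},
    "t": {2, 3, 4, 5}, "u": {1, 3, 6}, "v": {1, 2, 3, 6},
    "w": {2, 4, 5, 6}, "x": {1, 3, 4, 6}, "y": {1, 3, 4, 5, 6},
    "z": {1, 3, 5, 6},
}

def vector(letter):
    """A letter as a 6-dimensional binary vector of dot presence."""
    return tuple(int(d in PATTERNS[letter]) for d in range(1, 7))

def dot_informativeness(dot):
    """Binary entropy of a dot's presence across the 26 letters
    (an assumed form of 'entropy-based informativeness')."""
    p = sum(dot in dots for dots in PATTERNS.values()) / len(PATTERNS)
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def hamming(a, b):
    """Dot-overlap distance between two letters."""
    return sum(x != y for x, y in zip(vector(a), vector(b)))

# Every letter occupies a distinct point in the 6-D binary space,
# so pairwise discrimination is possible in principle.
assert len({vector(l) for l in PATTERNS}) == 26
```

On this coding, dot 2 (present in roughly half the letters) carries nearly a full bit of information, whereas dot 1 (present in 21 of 26 letters) carries less; the pairwise Hamming distances provide the dot-overlap baseline that the paper's perceptual similarity matrices are said to go beyond.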
Pub Date: 2026-02-05 | DOI: 10.1186/s41235-026-00707-z
Eesha Kokje, Eva Lermer, Anne-Kathrin Kleine, Susanne Gaube
A primary aim of human-AI teaming is to achieve better collaborative performance than either human or AI can achieve alone. Despite considerable efforts in this direction, issues such as users' overreliance on decision aids remain a challenge that prevents this. In this study, we evaluated the potential of non-concurrent advice presentation as a strategy to reduce overreliance in a face-matching task. We conducted three pre-registered experiments examining (a) on-demand binary advice, (b) on-demand similarity ratings, and (c) conditional advice (i.e., advice presented only if participants' initial unaided decision differed from the AI prediction), compared to concurrent advice. Across all experiments, we did not find significant differences in the overall performance of participants in the concurrent vs. experimental conditions. However, participants followed AI advice more when they requested it; conversely, when they requested similarity ratings, they followed advice less. Thus, on-demand similarity ratings reduced overreliance on AI compared to concurrent presentation of similarity ratings. However, overall, similarity ratings were not more helpful than basic advice. We also found that participants were less likely to follow AI advice presented after their initial unaided decision contradicted the AI prediction, and that they were more confident in rejecting incorrect advice but less confident when accepting correct advice. Overall, non-concurrent paradigms have the potential to reduce overreliance, but at the cost of underreliance on correct advice.
AI-augmented decision-making in face matching: comparing concurrent and non-concurrent advice presentation. Cognitive Research: Principles and Implications, 11(1), 11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12876487/pdf/
Pub Date: 2026-01-31 | DOI: 10.1186/s41235-026-00705-1
Matthew D Blanchard, Eugene Aidman, Lazar Stankov, Sabina Kleitman
When individuals collaborate, they often rely on momentary estimates of their own and their partner's confidence (decision confidence) to guide collective decisions and achieve their goals. Through interaction, these confidence estimates tend to align over time, a process known as confidence matching. More stable, dispositional trait confidence is also emerging as a key factor shaping the dynamics and outcomes of collaborative action. We examined how trait confidence and type of communication impact the accuracy of dyadic decisions, decision confidence, and the dynamics of decision confidence, including decision-specific confidence matching. In this study, 210 participants completed general knowledge tests individually and collaboratively, forming 105 dyads. The tests were completed under three communication conditions: isolated (no interaction), passive (viewing the partner's response and numeric confidence rating), and active (verbal discussion). Participants assessed as high-trait or low-trait confidence were allocated to three types of dyads: low-trait (two low-trait members), mixed-trait (one low-trait and one high-trait member), or high-trait (two high-trait members) confidence dyads. Statistically controlling for cognitive ability, trait confidence moderated decision accuracy and decision confidence gains: dyads with mixed-trait or high-trait confidence showed greater decision accuracy improvements, relative to their individual decisions, in the active than in the passive communication condition, whereas low-trait confidence dyads benefited equally from active and passive communication. Collaboration increased decision confidence overall, especially for high-trait confidence dyads under active communication. Decision-specific confidence matching occurred rapidly in both passive and active communication but predicted decision accuracy gains only in the passive condition, where participants had limited social information.
Although active verbal communication led to the greatest overall decision accuracy, these gains were not driven by decision-specific confidence matching. Our findings highlight the critical role of trait confidence in shaping collaborative outcomes in dyads and extend previous research by showing that decision-specific confidence matching occurs naturally during verbal communication. SIGNIFICANCE STATEMENT: When two people collaborate to make decisions, we often assume that "two heads are better than one." However, the benefits of dyadic decision-making depend on how effectively group members share and interpret their confidence in judgments. Our study highlights trait confidence, an individual's stable tendency to express confidence, as a critical yet often overlooked factor that shapes the success of dyadic decisions. We found that trait confidence moderates dyadic improvements in both decision accuracy and decision confidence. Importantly, the effectiveness of dyadic collaboration depends on the type of communication.
How trait confidence and communication shape dyadic decision outcomes and confidence matching. Cognitive Research: Principles and Implications, 11(1), 10. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12860768/pdf/
Pub Date: 2026-01-29 | DOI: 10.1186/s41235-026-00704-2
Yueyuan Zheng, Danni Chen, Xiaoqing Hu, Janet Hsiao
While people are often experts at perceiving and categorizing faces into meaningful social categories (e.g., race), suboptimal scenarios such as mask use may impair face processing. Here we examined how mask use may differentially impact own- and other-race face processing in social categorization, and the underlying neurocognitive mechanisms, using simultaneous eye movement and EEG recording. We found that mask use made participants' face scanning patterns more eyes-focused and consistent, and reduced the differences in both eye movement patterns and the early attention-related ERP component P1 between viewing own- and other-race faces. Moreover, mask use did not change how people categorize biracial morphed faces, or the advantage in categorization speed for other-race faces. These results suggest that when perceiving masked faces, information from the eye region may be sufficient for social categorization, and that race-based social categorizations can be impervious to mask use. Interestingly, we found that when viewing other-race faces, for which people have less perceptual expertise, those who show more consistent face scanning patterns process masked faces more efficiently. These findings have important implications for cross-race face perception, especially when viewing conditions become suboptimal. SIGNIFICANCE STATEMENT: As mask use has become a common practice in response to respiratory virus outbreaks, it has inadvertently altered both health practices and the complex dynamics of social interaction. In a world that values diversity and cross-racial interactions, understanding how masks influence our cognitive processes during cross-race face perception is not just timely but vital.
Given this context, we examined the effect of mask use on race categorization by systematically investigating eye movement behavior and neural representations of own- versus other-race faces, and how these mask-induced changes are associated with each other. By utilizing simultaneous eye movement and EEG recording, our study reveals that the eye region can significantly influence social categorization, suggesting that race-based categorizations persist even in the presence of masks. Interestingly, we found that for other-race faces, with which people have less perceptual expertise, those who adopt a more consistent face scanning pattern for masked faces process them more efficiently. This highlights the importance of adaptable visual routines when viewing conditions are not optimal. Though the current research was prompted by COVID-19, our findings generalize to a broader context and enhance our understanding of human visual and social cognition.
The effect of mask use on cross-race face perception: a simultaneous EEG and eye-tracking study. Cognitive Research: Principles and Implications, 11(1), 9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12855653/pdf/
Pub Date: 2026-01-15 | DOI: 10.1186/s41235-025-00697-4
Niki Pennanen, Lauri Oksama
Performing under pressure, particularly in multitasking environments, is a critical challenge in both everyday life and high-stakes professions. This study investigated the differential effects of monitoring and outcome pressure on time-sharing performance and the allocation of visual attention. Using a within-subjects design, 30 participants completed a recently devised time-sharing task requiring prioritization under three different pressure conditions. We hypothesized that in a high-demand time-sharing environment, outcome pressure would impair task performance and visual sampling of subtasks more than monitoring pressure. To investigate, we recorded participants' task performance metrics and eye movements. However, our confirmatory analyses found no evidence supporting either hypothesis. In contrast, our additional exploratory analyses revealed that monitoring pressure, not outcome pressure, led to a statistically significant performance decrease. Notably, this effect occurred without changes in visual sampling. This unexpected finding reflects the high sensorimotor demands of the task, specifically the need for precise and rapid mouse movements, which may have been disrupted by participants' heightened self-consciousness under monitoring pressure. Our findings contribute to the literature on the differential effects of monitoring and outcome pressure, with potential implications for high-stakes domains like military operations. In situations requiring fine motor control, such as piloting aircraft or operating drones, monitoring pressure may disrupt performance even without altering attentional allocation. Similarly, everyday activities like driving under observation (e.g., driving tests) or performing in front of an audience may be affected. Understanding how pressure disrupts performance in such scenarios can inform training and support strategies to mitigate its impact.
Pressure in the spotlight: effects of monitoring pressure and outcome pressure on time-sharing performance. Cognitive Research: Principles and Implications, 11(1), 8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12808004/pdf/
Pub Date: 2026-01-12 | DOI: 10.1186/s41235-025-00703-9
Jessica Savoie, Francesca Capozzi, Jelena Ristic
Although gaze following is an important socio-interactive process, little is known about how this behavior is affected when multiple gaze cues are encountered in groups. Emerging research suggests that both the visual consistency of cues and group size may play a role. For example, in groups of three, a minority of target-congruent gaze cues (1/3 of faces looking at the target) has been found to facilitate target responses, whereas in groups of five, a majority of target-congruent gaze cues (3/5 of faces looking at the target) was needed for the same effect. Here, in two preregistered experiments, we provide a high-powered conceptual replication of these past experiments and extend them to examine the possible uniqueness of responses to gaze using a comparison with arrows. We found that a minority of target-congruent gaze and arrow cues significantly facilitated target responses regardless of group size. Furthermore, additional target-congruent cues, either gaze or arrows, led to further significant response facilitation. Thus, responses were initially facilitated by a minority proportion of target-congruent cues, with response times continuing to decrease as more cues pointed consistently toward the target. This suggests that humans may use both quorum-like and numerosity evaluations flexibly to guide responses in contexts presenting multiple social or non-social cues.
Flexible use of quorum and numerosity principles in evaluation of social and non-social cues in group contexts. Cognitive Research: Principles and Implications, 11(1), 7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12791076/pdf/
Pub Date: 2026-01-08 | DOI: 10.1186/s41235-025-00702-w
Mark W Becker, Andrew Rodriguez, Derrek T Montalvo, Chad Peltier
As targets become rare in visual search tasks, the likelihood of missing them increases, a phenomenon known as the low-prevalence effect (LPE). This has important implications for real-world searches, but reducing the LPE has proven challenging. In Experiment 1, we used a low-prevalence T-among-Ls task and found that distributing "probe" trials (trials with known targets and post-response feedback) reduced the LPE. In Experiment 2, participants searched for two low-prevalence targets (a T and an O among Ls and Qs), and we varied how often each appeared in probe trials. The probe benefit scaled with the frequency of the matching target, suggesting limited generalizability to non-probed targets. Experiment 3 used eye tracking to examine whether probes affected quitting thresholds, decision criteria, or guidance. Results showed that probes biased top-down guidance toward features of frequently probed targets, without affecting the number of items inspected or the decision criterion. In Experiment 4, we tested whether feedback was necessary for the probe benefit. Findings suggest that probes improve rare-target search by altering perceived prevalence, not through feedback alone. Overall, probes may reduce the LPE by increasing perceived prevalence and thereby strengthening search guidance, but only when probe targets closely match actual search targets.
{"title":"Reducing the low-prevalence effect with probe trials.","authors":"Mark W Becker, Andrew Rodriguez, Derrek T Montalvo, Chad Peltier","doi":"10.1186/s41235-025-00702-w","DOIUrl":"10.1186/s41235-025-00702-w","url":null,"abstract":"<p><p>As targets become rare in visual search tasks, the likelihood of missing them increases-a phenomenon known as the low-prevalence effect (LPE). This has important implications for real-world searches, but reducing the LPE has proven challenging. In Experiment 1, we used a low-prevalence T-among-Ls task and found that distributing \"probe\" trials-trials with known targets and post-response feedback-reduced the LPE. In Experiment 2, participants searched for two low-prevalence targets (T and O among Ls and Qs), and we varied how often each appeared in probe trials. The probe benefit scaled with the frequency of the matching target, suggesting limited generalizability to non-probed targets. Experiment 3 used eye tracking to examine whether probes affected quitting thresholds, decision criteria, or guidance. Results showed that probes biased top-down guidance toward features of frequently probed targets, without affecting the number of items inspected or the decision criterion. In Experiment 4, we tested whether feedback was necessary for the probe benefit. Findings suggest that probes improve rare-target search by altering perceived prevalence, not through feedback alone. 
Overall, probes may reduce the LPE by increasing perceived prevalence and thereby increasing search guidance, but only when probe targets closely match actual search targets.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"11 1","pages":"5"},"PeriodicalIF":3.1,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12779849/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145919101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
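As an illustrative aside, the eye-tracking experiment distinguishes a shifted decision criterion from reduced sensitivity or inspection. Signal detection theory makes that distinction computable: the sketch below, using hypothetical hit and false-alarm rates (not the study's data), shows how an LPE-like pattern can appear as a conservative criterion shift (c) while sensitivity (d') stays constant.

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Sensitivity (d') and criterion (c) from hit and false-alarm rates.

    A positive c reflects a conservative bias: a raised threshold for
    reporting a target, which produces more misses.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    c = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, c

# Hypothetical rates: roughly equal sensitivity, but the low-prevalence
# observer adopts a conservative criterion and so misses more targets.
high_prev = sdt_measures(0.90, 0.10)  # c near 0 (neutral criterion)
low_prev = sdt_measures(0.70, 0.02)   # c > 0 (conservative criterion)
```

Under this decomposition, an intervention such as probe trials would show up as c moving back toward zero rather than as a change in d'.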
Pub Date : 2026-01-08 DOI: 10.1186/s41235-025-00695-6
Matthew B Thompson, Varun Gandhi, Alexandra Richardson-Newton, Guillermo Campitelli
Professions such as the military, aviation, submarine operation, and emergency response require individuals to navigate complex environments characterized by limited information, stringent time constraints, and significant pressure. Effective decision making under pressure is crucial in safety-critical professions, yet measuring this expertise remains challenging. Inspired by the military context, this article introduces the virtual reality decision-making expertise (VR-DMX) environment, designed to evaluate decision-making expertise under time constraints within a virtual reality scenario. VR-DMX simulates an amusement arcade where users must decide how to allocate time across various games to maximize ticket earnings. Through two validation studies (N = 60 and N = 76), we examined two metrics: Total Tickets (measuring overall performance) and DMX score (isolating decision-making quality). Both metrics demonstrated symmetrical distributions without floor or ceiling effects, with coefficients of variation comparable to established individual difference measures (32.4-37.4% for Total Tickets; 20.8-27.6% for DMX score). The correlation between the two metrics (meta-analytic r = 0.771, 95% CI [0.599, 0.943]) indicates that they measure related but distinct constructs. Our findings indicate that VR-DMX effectively differentiates individual performance levels and captures a distinct decision-making component, separate from general cognitive abilities. Comparing decision-making expertise between professionals in safety-critical fields and individuals without such experience would be a sensible next step toward validating its potential for selection and training applications.
VR-DMX was designed to measure decision-making expertise in safety-critical contexts; the initial validation data, showing effective differentiation of individual performance levels, suggest that continued development could realize this aim in selection, training, and performance prediction.
Title: "Detecting expertise in decision making under pressure: a virtual reality assessment environment and empirical evaluation." Cognitive Research-Principles and Implications, 11(1), 6. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12783459/pdf/
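The dispersion and reliability statistics quoted above (coefficients of variation; a correlation with a 95% CI) are standard computations, sketched minimally below. Note the CI here uses a single-sample Fisher z-transform as an illustration; the paper's interval comes from a meta-analysis across the two studies, so the numbers will not match.

```python
import math
import statistics

def coefficient_of_variation(xs):
    """Sample standard deviation expressed as a percentage of the mean."""
    return 100 * statistics.stdev(xs) / statistics.fmean(xs)

def pearson_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a Pearson r via the Fisher z-transform."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)  # standard error of Fisher z
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Illustrative only: the reported r with the Study 1 sample size (N = 60).
lo, hi = pearson_ci(0.771, 60)
```

A CV in the 20-37% range, as reported for both metrics, indicates meaningful between-person spread without the clumping at extremes that floor or ceiling effects would produce.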
Pub Date : 2026-01-07 DOI: 10.1186/s41235-025-00700-y
Didem Pehlivanoglu, Mengdi Zhu, Jialong Zhen, Aude A Gagnon-Roberge, Rebecca K Kern, Damon Woodard, Brian S Cahill, Natalie C Ebner
Deepfakes are synthetic media created by deep-generative methods to fake a person's audio-visual representation. The growing sophistication of deepfake technology poses significant challenges for both machine learning (ML) algorithms and humans. Here we used real and deepfake static face images (Study 1) and dynamic videos (Study 2) to (i) investigate sources of misclassification errors in machines, (ii) identify psychological mechanisms underlying detection performance in humans, and (iii) compare humans and machines in their classification accuracy and decision confidence. Study 1 found that machines achieved excellent performance in classifying real and deepfake images, with good accuracy in feature classification. Humans, in contrast, struggled to distinguish real from deepfake static images: their classification accuracy was at chance level, and this underperformance relative to machines was accompanied by a truth bias and low confidence in detecting deepfake images. Using dynamic video stimuli, Study 2 found that machine performance was near chance level, with poor feature classification. Further, machines showed a greater lie bias and lower decision confidence than humans, who outperformed machines in detecting video deepfakes. Finally, Study 2 revealed that higher analytical thinking, lower positive affect, and greater internet skills were associated with better video deepfake detection in humans. Combined, the findings across these two studies advance understanding of the factors contributing to deepfake detection in both machines and humans, and they can inform interventions against the growing threat of deepfakes by identifying where human-AI collaboration would be most beneficial for detection.
Title: "Is this real? Susceptibility to deepfakes in machines and humans." Cognitive Research-Principles and Implications, 11(1), 3. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12779810/pdf/
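Claims like "accuracy at chance level" (humans, Study 1) or "near chance level" (machines, Study 2) can be checked with an exact one-sided binomial test against guessing (p = 0.5). A minimal sketch with made-up trial counts, not the study's data:

```python
from math import comb

def binom_p_above_chance(n_correct, n_trials, p_chance=0.5):
    """One-sided exact binomial p-value: P(X >= n_correct) under chance."""
    return sum(
        comb(n_trials, k) * p_chance**k * (1 - p_chance) ** (n_trials - k)
        for k in range(n_correct, n_trials + 1)
    )

# 15/20 correct is unlikely under guessing; 11/20 is consistent with chance.
p_above = binom_p_above_chance(15, 20)
p_at_chance = binom_p_above_chance(11, 20)
```

A large p-value here does not prove chance-level performance, only failure to reject it, which is why per-participant trial counts matter for such claims.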
Pub Date : 2026-01-07 DOI: 10.1186/s41235-025-00701-x
Eesha Kokje, Eva Lermer, Christopher Donkin, Susanne Gaube
There has been a steep rise in the use of decision aids enabled by artificial intelligence (AI) for facial identity verification. How such systems are used, and how their implementation affects human decision-making, is still not well understood. The current study explored factors related to the design of the task paradigm and the presentation of AI-enabled predictions. Across three pre-registered experiments, we examined the impact of (a) stated AI accuracy, (b) mismatch frequency (i.e. the proportion of match and mismatch pairs), and (c) advice type (binary only vs. binary + similarity rating) on performance in a one-to-one face matching task. Participants' performance generally improved when aided by AI compared to a baseline without decision support. The largest improvement was observed when no information on the AI's overall accuracy was provided. Further, the frequency of mismatches did not influence overall performance but did bias responses. Finally, similarity ratings marginally improved overall performance and increased users' certainty in their decisions, but did not help participants dismiss inaccurate predictions. Additionally, two findings were consistent across all experiments. First, participants often failed to dismiss inaccurate AI predictions, resulting in significantly lower performance than with accurate predictions. Second, at the group level, the human-AI team did not outperform the AI alone, though examination of individual performance showed that some participants were able to exceed the AI's accuracy. These findings contribute towards determining appropriate design formats for AI predictions in a human-in-the-loop system, so that the performance of the human-AI team can be maximised.
Title: "Understanding the influence of design-related factors on human-AI teaming in a face matching task." Cognitive Research-Principles and Implications, 11(1), 4. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12779867/pdf/