Beyond evidence accumulation: shared-goal belief guides action generalization in social groups.
Pub Date: 2025-08-26 | DOI: 10.1186/s41235-025-00666-x
Jipeng Duan, Yinfeng Hu, Wenying Zhou, Qingqing Ye, Ting Zhao, Jun Yin
People tend to generalize the actions of known group members to new members of the same group. This study investigated how the prevalence of specific actions among multiple individuals determines action generalization within social groups. We propose that people rely on the belief that group members work toward a shared goal (i.e., a shared-goal belief) to guide action generalization. Consequently, the extent of action generalization may not consistently increase with the sampled prevalence of group members performing the same goal-directed action, resulting in a deviation from graded action generalization (i.e., nongraded action generalization). Experiment 1 revealed that the more participants believed that group members pursued a shared goal, the more likely nongraded action generalization was to occur. In Experiment 2, an experimental manipulation weakened the shared-goal belief and led to a graded pattern of action generalization as evidence of action prevalence accumulated. These findings suggest that a shared-goal belief within groups significantly shapes action generalization beyond the mere influence of sampled action prevalence. Social groups not only provide a framework for selecting evidence for action generalization but also shape prior beliefs that influence our expectations of others' actions.
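To make the graded/nongraded contrast concrete, here is a minimal toy sketch (not the authors' model; the mixture form, the `p_shared_goal` parameter, and all numbers are illustrative assumptions): a strong shared-goal prior makes the predicted generalization probability jump once any member is observed acting and then stay nearly flat across sampled prevalence, whereas a weak prior tracks prevalence gradually.

```python
import numpy as np

def p_new_member_acts(k_acting, n_sampled, p_shared_goal, noise=0.05):
    """Toy model: probability a new group member performs the action.

    With probability p_shared_goal the group is assumed to pursue a shared
    goal, so one observed actor is enough to expect the action of every
    member (nongraded). Otherwise generalization tracks sampled prevalence
    (graded), smoothed with Laplace counts. All parameters are hypothetical.
    """
    shared = 1.0 - noise if k_acting > 0 else noise   # nongraded component
    graded = (k_acting + 1) / (n_sampled + 2)         # graded component
    return p_shared_goal * shared + (1 - p_shared_goal) * graded

for prior in (0.9, 0.1):  # strong vs. weak shared-goal belief
    preds = [p_new_member_acts(k, 4, prior) for k in range(5)]
    print(f"shared-goal prior {prior}:", [round(p, 2) for p in preds])
```

Under the strong prior the curve is nearly flat from one observed actor onward; under the weak prior it rises steadily with prevalence, mirroring the Experiment 1 versus Experiment 2 patterns.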
Decoding target discriminability and time pressure using eye and head movement features in a foraging search task.
Pub Date: 2025-08-22 | DOI: 10.1186/s41235-025-00657-y
Anthony J Ries, Chloe Callahan-Flintoft, Anna Madison, Louis Dankovich, Jonathan Touryan
In military operations, rapid and accurate decision-making is crucial, especially in visually complex and high-pressure environments. This study investigates how eye and head movement metrics can be used to infer changes in search behavior during a naturalistic shooting scenario in virtual reality (VR). Thirty-one participants performed a foraging search task using a head-mounted display (HMD) with integrated eye tracking. Participants searched for targets among distractors under varying levels of target discriminability (easy vs. hard) and time pressure (low vs. high). As expected, behavioral results indicated that increased discrimination difficulty and greater time pressure impaired performance, leading to slower response times and reduced d-prime. Support vector classifiers assigned a search condition (discriminability and time pressure) to each trial based on eye and head movement features. Combined eye and head features produced the most accurate classification model for capturing task-induced changes in search behavior, outperforming models based on eye or head features alone. While eye features demonstrated strong predictive power, the inclusion of head features significantly enhanced model performance. Across the ensemble of eye metrics, fixation-related features were the most robust for classifying target discriminability, while saccade-related features played a similar role for time pressure. In contrast, models constrained to head metrics emphasized global movement (amplitude, velocity) for classifying discriminability but shifted toward kinematic intensity (acceleration, jerk) under time pressure. Together, these results speak to the complementary roles of eye and head movements in understanding search behavior under changing task parameters.
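As a rough sketch of this decoding pipeline (the feature set, labels, and data below are placeholder assumptions, not the study's actual features), a linear support vector classifier can be cross-validated on eye-only, head-only, and combined feature sets:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder per-trial features, e.g., fixation duration, saccade amplitude,
# head velocity, head jerk (columns); real features would come from the HMD's
# integrated eye tracker and head pose stream.
X_eye = rng.normal(size=(200, 4))
X_head = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # condition label, e.g., low vs. high time pressure

for name, X in [("eye", X_eye), ("head", X_head),
                ("eye+head", np.hstack([X_eye, X_head]))]:
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```

With real (non-random) features, comparing the three accuracies is what licenses the claim that combined eye and head features outperform either alone.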
A multi-item signal detection theory model for eyewitness identification.
Pub Date: 2025-08-22 | DOI: 10.1186/s41235-025-00652-3
Yueran Yang, Janice L Burke, Justice Healy
How do witnesses make identification decisions when viewing a lineup? Understanding the witness decision-making process is essential for researchers to develop methods that reduce mistaken identifications and improve lineup practices. Yet the inclusion of fillers has posed a pivotal challenge to this task, because traditional signal detection theory applies only to binary decisions and cannot easily incorporate lineup fillers. This paper proposes a multi-item signal detection theory (mSDT) model to help understand the witness decision-making process. The mSDT model clarifies the importance of considering the joint distributions of suspect and filler signals. The model also visualizes these joint distributions in a multivariate decision space, which allows all eyewitness responses to be incorporated, including suspect identifications, filler identifications, and rejections. The paper begins with a set of simple assumptions to develop the mSDT model and then explores alternative assumptions that can accommodate more sophisticated considerations. The paper further discusses the implications of the mSDT model. With a mathematical modeling and visualization approach, the mSDT model provides a novel theoretical framework for understanding eyewitness identification decisions and addressing debates around eyewitness SDT and ROC applications.
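A minimal simulation in the spirit of the model's simple starting assumptions (the independent equal-variance signals, max-signal decision rule, and parameter values below are assumptions for illustration, not the paper's exact specification): each lineup member generates a memory-strength signal, and the witness identifies the member with the strongest signal if it exceeds a criterion, otherwise rejecting the lineup.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_lineup(n_trials=100_000, n_fillers=5, d_prime=1.5,
                    criterion=1.0, target_present=True):
    """Simulate suspect IDs, filler IDs, and rejections under a max-signal rule."""
    # Filler signals ~ N(0, 1); the suspect's signal is shifted by d' only
    # when the suspect is the culprit (target-present lineup).
    suspect = rng.normal(d_prime if target_present else 0.0, 1.0, n_trials)
    fillers = rng.normal(0.0, 1.0, (n_trials, n_fillers))
    best_filler = fillers.max(axis=1)
    winner = np.maximum(suspect, best_filler)

    reject = winner < criterion
    suspect_id = (~reject) & (suspect >= best_filler)
    filler_id = (~reject) & (best_filler > suspect)
    return suspect_id.mean(), filler_id.mean(), reject.mean()

for present in (True, False):
    s, f, r = simulate_lineup(target_present=present)
    print(f"target-{'present' if present else 'absent'}: "
          f"suspect ID {s:.2f}, filler ID {f:.2f}, rejection {r:.2f}")
```

Filler identifications arise naturally here because a filler's signal can exceed both the suspect's signal and the criterion — precisely the response category that binary SDT cannot represent.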
Auditory-tactile presentation accelerates target detection in a multitasking situation.
Pub Date: 2025-08-22 | DOI: 10.1186/s41235-025-00664-z
Angelo G Gaillet, Clara Suied, Gabriel Arnold, Marine Taffou
There is ample evidence from cognitive science and neuroscience studies that multisensory stimuli are detected better and faster than their unisensory counterparts. Yet most of this work has been conducted in settings and with protocols in which participants had only a detection task to perform. In realistic and complex environments, such as military ones, detection of critical information has to be performed while the operator is concurrently managing several other tasks and processing a vast amount of sensory input. To date, it remains to be determined whether multisensory benefits for detection hold in complex multitasking situations. In the present study, we compared the detection performance of healthy participants when the target was only auditory, only tactile, or both auditory and tactile. Detection performance was measured in a simple detection task condition and in a multitasking condition. In the latter, participants had to detect the targets while concurrently performing the subtasks of the MATB-II environment, designed in the 1990s by NASA to simulate piloting tasks. Multisensory acceleration of reaction times was larger during multitasking than in single-task conditions. Crucially, participants detected auditory-tactile targets faster than their unisensory counterparts. While previous studies have reported such facilitation effects in single-task contexts, our results show that multisensory facilitation of detection speed does occur in a realistic multitasking environment and is larger than in simple task conditions. Auditory-tactile displays seem to have the potential to enhance information presentation and could be used in applied settings like military aviation.
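The abstract does not name its analysis, so purely as an illustration of how multisensory reaction-time facilitation is commonly quantified, the sketch below checks simulated detection RTs against Miller's race-model inequality (all RT distributions are fabricated for the example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated detection RTs (ms) for illustration only; real data would come
# from the MATB-II multitasking and single-task sessions.
rt_aud = rng.lognormal(np.log(420), 0.2, 5000)   # auditory-only targets
rt_tac = rng.lognormal(np.log(440), 0.2, 5000)   # tactile-only targets
rt_at = rng.lognormal(np.log(370), 0.2, 5000)    # bimodal, faster on average

# Race-model inequality (Miller, 1982): at each time t,
# P(RT_AT <= t) should not exceed P(RT_A <= t) + P(RT_T <= t)
# if the speed-up is fully explained by two independently racing channels.
cdf = lambda x, t: (x <= t).mean()
for t in np.quantile(rt_at, np.linspace(0.05, 0.95, 10)):
    violation = cdf(rt_at, t) - min(1.0, cdf(rt_aud, t) + cdf(rt_tac, t))
    print(f"t = {t:6.1f} ms  violation = {violation:+.3f}")
```

Positive violations at fast quantiles would indicate genuine multisensory integration rather than mere statistical facilitation.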
Process-based measures in high-stakes testing: practical implications for construct validity within military aviation selection.
Pub Date: 2025-08-20 | DOI: 10.1186/s41235-025-00660-3
Joseph T Coyne, Laura Jamison, Kaylin Strong, Ciara Sibley, Cyrus Foroughi, Sarah Melick
This paper examines how process-based spatial ability and attention measures taken within a high-stakes battery used to select pilots in the US Navy compare to lab-based measures of the same constructs. Process-based measures typically function by having individuals perform either a novel task or a task with novel stimuli. However, applicants often spend time practicing the tasks prior to taking the battery. A group of 307 Naval Flight Students took several spatial ability, attention, and general processing measures. One of the spatial tasks was the same as the spatial task in the Navy's pilot selection battery, which all of the participants had taken. All of the lab spatial ability measures, including the one used in the selection battery, were highly correlated and loaded onto the same spatial ability factor. However, the high-stakes spatial subtest was not correlated with any of the lab spatial measures, including the same test administered in the lab. The lab spatial ability data were also correlated with training outcomes, whereas the high-stakes process spatial and attention measures were not. The high-stakes attention measure was weakly correlated with some of the general processing measures. The pattern of results suggests that familiarity with the spatial and attention tasks in the high-stakes environment may be negating those tests' ability to measure the constructs they were designed to measure, and also reducing their effectiveness in predicting training performance. Statement of significance: This paper addresses an increasingly difficult challenge the Navy is facing within aviation selection: applicants are highly motivated and have access to unofficial replicas of the Navy's test battery. The challenge is specific to process-based measures, such as spatial ability and attention, that rely on some degree of novelty to work. When applicants practice these types of tests, they can practice to the test, memorize items, and learn strategies, which impairs the tests' ability to measure the cognitive constructs they were designed to measure and reduces their ability to predict flight training outcomes. This is particularly problematic because unofficial test preparation software can replicate a new test within days. While the data presented here are limited to spatial ability and attention within military pilot selection, the issue applies to a much broader community of researchers. Anyone developing a high-stakes test with a large and motivated applicant pool may see their process-based measures perform differently in a high-stakes environment than in a low-stakes laboratory one in which participants are naïve to the tasks. The extent to which practice can alter high-stakes test performance is an important question. The results suggest that test developers should assume participants are practiced and assess the extent to which prac…
Effect of multi-refresh-rate method on user experience: sustained attention and inattentional blindness.
Pub Date: 2025-08-15 | DOI: 10.1186/s41235-025-00663-0
Jieun Cho, Jeunghwan Choi, Cheongil Kim, Jeong Hyeon Park, Sang Chul Chong
In today's digital world, understanding the implications of display refresh rate for visual perception and energy consumption is crucial. While high refresh rates enhance motion perception and user experience, they also increase energy usage, prompting the need for adaptive solutions like variable refresh rates. This study examines whether users notice or are affected by reduced refresh rates in task-irrelevant areas, and whether variable refresh rates compromise a satisfactory display experience. Most participants failed to detect decreases in refresh rate in their peripheral view, and their performance on the main task, which required sustained attention, remained unaffected. However, when informed of the possible change in the periphery, participants detected it more often. In addition, during out-of-the-zone states, people with expectations about the phenomenon may be more likely to falsely report a change in the display. The findings suggest that centrally focused attention limits awareness of peripheral refresh-rate changes, supporting the potential of multi-refresh-rate strategies to optimize energy efficiency without compromising user experience.
Presenting segmented images in a rapid serial visual presentation stream improves search accuracy.
Pub Date: 2025-08-15 | DOI: 10.1186/s41235-025-00653-2
Krystina Diaz, Mark W Becker, Chad Peltier, Jeffrey B Bolkhovsky
Visual search performance is a critical factor in many high-stakes duties, warranting strategies to enhance target detection accuracy. Research using rapid serial visual presentation (RSVP) of stimuli shows that observers can detect categorically defined, pre-specified targets even when the presentation rate is rapid, suggesting RSVP as a viable strategy. To investigate how, and how well, RSVP can improve target detection in complex search arrays, five experiments compared search performance between Full-Image search conditions and various RSVP-based conditions. Stimulus presentation time and total search time were equated across conditions. Experiment 1 demonstrated the utility of RSVP for enhancing target identification in simple arrays (i.e., Landolt Cs). Experiment 2 involved more complex scenes and target-present/-absent judgments; results showed that RSVP increased target detections through both a liberal shift in criterion and an increase in sensitivity. Experiment 3 provided some evidence against reduced peripheral clutter as the primary contributor to the RSVP performance gains. Experiments 4 and 5 prompted and limited eye movements, respectively, to distinguish the role of eye movements in RSVP-based search. These two experiments imply that lower target detection performance under time constraints in whole-image search is attributable to time-wasting, irrelevant, and inefficient eye movements. Together, the experiments suggest that the RSVP advantage occurs because the method maximizes the time available for inspecting and processing each search image or segment. Real-world visual search tasks may benefit from segmenting the search display and presenting images in an RSVP stream.
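For reference, the criterion and sensitivity effects reported for Experiment 2 are standard signal detection indices; a short sketch with invented hit and false-alarm rates shows how d′ and c are computed (a more negative c indicates a more liberal criterion):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def sdt_indices(hit_rate, fa_rate):
    """Sensitivity d' and criterion c from hit and false-alarm rates."""
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # negative c = liberal bias
    return d_prime, c

# Hypothetical rates chosen so RSVP raises hits faster than false alarms.
for label, hr, far in [("Full-Image", 0.70, 0.10), ("RSVP", 0.85, 0.15)]:
    d, c = sdt_indices(hr, far)
    print(f"{label}: d' = {d:.2f}, c = {c:.2f}")
```

A pattern like this — higher d′ and lower c under RSVP — is what "a liberal shift in criterion and an increase in sensitivity" describes.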
This is not the way: global directional cues do not improve spatial learning in an immersive virtual environment.
Pub Date: 2025-08-07 | DOI: 10.1186/s41235-025-00654-1
Ece Yüksel, Zachary Boogaart, Steven M Weisberg
Spatial navigation relies on extracting environmental information to determine where to go. Navigational aids such as maps, compasses, and global positioning systems (GPSs) support navigation behavior by offering access to easily extractable information, but do these aids enhance spatial memory? Here, we propose the hypothesis that navigation aids support navigation behavior while they are available but do not necessarily enhance navigation by improving memory of a space. For example, a compass provides a global reference direction and bearing, showing where north is, but may not produce a more accurate representation of an environment once the compass is removed. We present two experiments evaluating whether people learned a large-scale, immersive virtual environment better when provided with a global reference direction. We also explored whether participants used the provided reference direction to anchor their mental representation of the environment, i.e., whether the alignment of their mental map matched the cued direction. In the first (preregistered) experiment, we found no evidence of a difference in spatial memory performance between those with the compass available and those without (n = 54). The second experiment (n = 67) likewise revealed no difference in participants' environmental knowledge between a compass condition and a mountain range, which provided a global directional cue in a more salient and concrete form. Exploratory results revealed that participants did not use either cue as a reference direction. Our results inform theories of how reference directions support navigation and, more broadly, how external cues are incorporated (or not) into cognitive representations.
Clutter costs in head-mounted displays: a study examining trade-offs between overlay and adjacent presentation of information.
Pub Date: 2025-08-05 | DOI: 10.1186/s41235-025-00650-5
Amelia C Warden, Christopher D Wickens, Daniel Rehberg, Benjamin A Clegg, Francisco R Ortega
This work examines the influence of clutter when information is presented on a head-mounted display (HMD). We compare the clutter costs that arise when a display overlays the real-world scene to the visual scanning costs that arise when the display is presented separately. Using an HMD in safety-critical environments reduces the repetitive visual scanning and head movements that can become effortful with separate displays, such as a tablet. However, a trade-off occurs with overlay displays when low-visibility information in the scene is needed or when perceiving text and symbols on the display requires high visual acuity. To examine this scan-clutter trade-off, participants performed tasks requiring focused attention on either the scene or the display. The HMD either overlaid the critical aspects of the scene or was presented adjacent to the scene, and the amount of clutter in both domains was quantified and manipulated. The overlay and adjacent conditions showed similar accuracy, but the overlay condition hindered tasks requiring focused attention on the scene. Because overlay clutter is perceived as closer to the observer, a biological tendency to prioritize nearer information appears to have disproportionately harmed attention to scene information. Increasing clutter in both domains imposed growing costs on both speed and accuracy. The results speak favorably to using an HMD, but signal the need for caution about the negative effects of clutter in either domain. These results highlight the importance of carefully designing HMDs to minimize clutter, especially when scene information is required.
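The abstract does not state how clutter was quantified, so the following is only a hypothetical stand-in: a crude edge-density metric, one simple way a display's clutter level might be scored before being manipulated (the function, threshold rule, and example images are all illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def edge_density_clutter(gray_image):
    """Crude clutter proxy: fraction of pixels lying on strong edges.

    A simple stand-in for a display-clutter metric; the study does not
    state its measure, so this is purely illustrative.
    """
    gx = ndimage.sobel(gray_image, axis=0)
    gy = ndimage.sobel(gray_image, axis=1)
    magnitude = np.hypot(gx, gy)
    threshold = magnitude.mean() + magnitude.std()
    return (magnitude > threshold).mean()

# Example: random texture (high clutter) vs. a mostly uniform field (low clutter).
rng = np.random.default_rng(3)
busy = rng.random((256, 256))
sparse = np.zeros((256, 256))
sparse[100:120, 100:120] = 1.0
print(f"busy: {edge_density_clutter(busy):.3f}, "
      f"sparse: {edge_density_clutter(sparse):.3f}")
```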
Selecting learning partners: memory for participation and competence.
Pub Date: 2025-08-03 | DOI: 10.1186/s41235-025-00656-z
Oktay Ülker, Daniel Bodemer
Remembering information about others is important but challenging in various social contexts. For instance, in long-term collaborative educational settings, students often need to choose peers for academic support. In such contexts, the selection process can depend on group awareness, i.e., the state of being informed about relevant social or cognitive characteristics of (potential) learning partners, such as their participation or competence. However, selection can also depend on memory for the group awareness information about peers, which is not always accurate. An experimental study (N = 85) examined how the type (participation vs. competence) and level (high vs. medium vs. low) of presented group awareness information influence learning partner selection in two phases: when the information is present and when it must be remembered. Higher levels were associated with higher selection probabilities, regardless of information type. Social comparison tendencies were associated with avoiding low-participation partners. Moreover, we analyzed memory for group awareness information with multinomial processing tree model-based analyses: high and low participation levels were remembered better than medium levels, whereas high competence was remembered better than medium and low competence. Findings suggest that learners use different approach and avoidance strategies for choosing learning partners based on the type of given information.
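As a hedged sketch of what a multinomial processing tree analysis involves — here a generic two-high-threshold memory model, not the authors' specific tree, with invented response counts — latent detection and guessing parameters are estimated from categorical response frequencies by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, n_hit, n_miss, n_fa, n_cr):
    """Two-high-threshold MPT: D = detection probability, g = guessing 'old'."""
    D, g = params
    p_hit = D + (1 - D) * g   # old item: detected, or undetected but guessed old
    p_fa = (1 - D) * g        # new item: not detected as new, guessed old
    eps = 1e-9                # guard against log(0)
    return -(n_hit * np.log(p_hit + eps) + n_miss * np.log(1 - p_hit + eps)
             + n_fa * np.log(p_fa + eps) + n_cr * np.log(1 - p_fa + eps))

# Invented response counts: hits, misses, false alarms, correct rejections.
counts = dict(n_hit=80, n_miss=20, n_fa=30, n_cr=70)
res = minimize(neg_log_lik, x0=[0.5, 0.5], args=tuple(counts.values()),
               bounds=[(0.01, 0.99)] * 2)
print("memory parameter D = %.2f, guessing g = %.2f" % tuple(res.x))
```

Fitting separate trees per information type and level, and comparing the memory parameters across them, is the kind of contrast that supports conclusions like "high and low participation levels were remembered better than medium levels."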