Pub Date: 2025-08-15 | DOI: 10.1186/s41235-025-00663-0
Jieun Cho, Jeunghwan Choi, Cheongil Kim, Jeong Hyeon Park, Sang Chul Chong
In today's digital world, understanding the implications of refresh rate for visual perception and energy consumption is crucial. While high refresh rates enhance motion perception and user experience, they also increase energy usage, prompting the need for adaptive solutions like variable refresh rates. This study examines whether users notice or are affected by reduced refresh rates in task-irrelevant areas, and whether variable refresh rates compromise a satisfactory display experience. Most participants failed to detect decreases in refresh rate in their peripheral view, and their performance on the main task, which required sustained attention, remained unaffected. However, when informed of the possible change in the periphery, participants detected it more often. In addition, during out-of-the-zone states, people with expectations about the phenomenon may be more likely to falsely report a change in the display. The findings suggest that centrally focused attention limits awareness of peripheral refresh-rate changes, supporting the potential of multi-refresh-rate strategies to optimize energy efficiency without compromising user experience.
Title: Effect of multi-refresh-rate method on user experience: sustained attention and inattentional blindness. Cognitive Research: Principles and Implications, 10(1), 50. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12356812/pdf/
Pub Date: 2025-08-15 | DOI: 10.1186/s41235-025-00653-2
Krystina Diaz, Mark W Becker, Chad Peltier, Jeffrey B Bolkhovsky
Visual search performance is a critical factor in many high-stakes duties, warranting strategies to enhance target detection accuracy. Research using rapid serial visual presentation (RSVP) of stimuli shows that observers can detect categorically defined, pre-specified targets even at rapid presentation rates, suggesting RSVP as a viable strategy. To investigate how, and how well, RSVP can improve target detection in complex search arrays, five experiments compared search performance between Full-Image search conditions and various RSVP-based conditions. Total stimulus presentation time (total search time) was the same across conditions. Experiment 1 demonstrated the utility of RSVP for enhancing target identification in simple arrays (i.e., Landolt Cs). Experiment 2 involved more complex scenes and target-present/target-absent judgments. Results showed that RSVP increased target detections through both a liberal shift in criterion and an increase in sensitivity. Experiment 3 provided some evidence against reduced peripheral clutter as the primary contributor to the RSVP performance gains. Experiments 4 and 5 prompted and limited eye movements, respectively, to distinguish the role of eye movements in RSVP-based search. These latter two experiments imply that lower target detection performance under time constraints in whole-image search is attributable to time-wasting, irrelevant, and inefficient eye movements. Together, the experiments suggest that the RSVP advantage occurs because the method maximizes time for inspecting and processing each search image or segment. Real-world visual search tasks may benefit from segmenting the search display and presenting images in an RSVP stream.
Title: Presenting segmented images in a rapid serial visual presentation stream improves search accuracy. Cognitive Research: Principles and Implications, 10(1), 49. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12356790/pdf/
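The criterion and sensitivity effects reported for Experiment 2 are standard signal-detection measures. As a hypothetical illustration (this is not the study's analysis code, and the counts are made up), d′ and the criterion c can be computed from hit and false-alarm counts like this:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from raw trial counts,
    with a log-linear correction so rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # c < 0 = liberal responding
    return d_prime, criterion

# Hypothetical counts: 45 hits / 5 misses on target-present trials,
# 10 false alarms / 40 correct rejections on target-absent trials
d_prime, criterion = sdt_measures(45, 5, 10, 40)
```

With these toy counts, d′ is positive (above-chance sensitivity) and c is negative (a liberal criterion), the same qualitative pattern the abstract reports for RSVP.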
Pub Date: 2025-08-07 | DOI: 10.1186/s41235-025-00654-1
Ece Yüksel, Zachary Boogaart, Steven M Weisberg
Spatial navigation relies on extracting environmental information to determine where to go. To support navigation behavior, navigational aids, such as maps, compasses, or global positioning systems (GPSs), offer access to easily extractable information, but do these aids enhance spatial memory? Here, we propose that navigation aids support navigation behavior when they are available but do not necessarily improve the memory of a space. For example, a compass provides a global reference direction and bearing, showing where north is, but may not result in a more accurate representation of the environment once the compass is removed. We present two experiments evaluating whether people learned a large-scale, immersive virtual environment better when provided with a global reference direction. We explored whether participants used the provided reference direction to anchor their mental representation of the environment, i.e., whether the alignment of their mental map matched the cued direction. In the first (preregistered) experiment (n = 54), we found no evidence of a difference in spatial memory performance between those with a compass available and those without. The second experiment (n = 67) likewise revealed no difference in participants' environmental knowledge between a compass condition and a mountain-range condition, which provided a global directional cue in a more salient and concrete form. Exploratory analyses revealed that participants did not use either cue as a reference direction. Our results inform theories of how reference directions support navigation and, more broadly, how external cues are incorporated (or not) into cognitive representations.
Title: This is not the way: global directional cues do not improve spatial learning in an immersive virtual environment. Cognitive Research: Principles and Implications, 10(1), 48. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12331561/pdf/
Pub Date: 2025-08-05 | DOI: 10.1186/s41235-025-00650-5
Amelia C Warden, Christopher D Wickens, Daniel Rehberg, Benjamin A Clegg, Francisco R Ortega
This work examines the influence of clutter when presenting information on a head-mounted display (HMD). We compare the clutter costs of displays that overlay a real-world scene with the visual-scanning costs incurred when displays are presented separately. Using an HMD in safety-critical environments reduces the repetitive visual scanning and head movements that can become effortful with separate displays, such as a tablet. However, a trade-off arises with overlay displays when low-visibility information in the scene is needed or when perceiving text and symbols on the display requires high visual acuity. To examine this scan-clutter trade-off, participants performed tasks requiring focused attention on either the scene or the display. The HMD either overlaid the critical aspects of the scene or was presented adjacent to the scene, and the amount of clutter in both domains was quantified and manipulated. The overlay and adjacent conditions showed similar accuracy, but the overlay condition hindered tasks requiring focused attention on the scene. This cost was attributed to a biological tendency to prioritize information perceived as closer to the observer, which disproportionately harmed attention to scene information. Increasing clutter in both domains imposed growing costs on both speed and accuracy. The results speak favorably for using an HMD but signal the need for caution about the negative effects of clutter in either domain. These results highlight the importance of carefully designing HMDs to minimize clutter, especially when scene information is required.
Title: Clutter costs in head-mounted displays: a study examining trade-offs between overlay and adjacent presentation of information. Cognitive Research: Principles and Implications, 10(1), 47. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12325806/pdf/
Pub Date: 2025-08-03 | DOI: 10.1186/s41235-025-00656-z
Oktay Ülker, Daniel Bodemer
Remembering information about others is important but challenging in various social contexts. For instance, in long-term collaborative educational settings, students often need to choose peers for academic support. The selection process can depend on group awareness, i.e., the state of being informed about relevant social or cognitive characteristics of (potential) learning partners, such as their participation or competence. However, selection can also depend on memory for group awareness information about peers, which is not always accurate. An experimental study (N = 85) examined how the type (participation vs. competence) and level (high vs. medium vs. low) of presented group awareness information influence learning partner selection in two phases: when the information is present and when it must be remembered. Higher levels were associated with higher selection probabilities, regardless of information type. Social comparison tendencies were associated with avoiding low-participation partners. Moreover, we analyzed memory for group awareness information using multinomial processing tree model-based analyses: high and low participation levels were remembered better than medium levels, whereas high competence was remembered better than medium and low competence. The findings suggest that learners use different approach and avoidance strategies for choosing learning partners depending on the type of information given.
Title: Selecting learning partners: memory for participation and competence. Cognitive Research: Principles and Implications, 10(1), 46. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12318891/pdf/
Pub Date: 2025-08-01 | DOI: 10.1186/s41235-025-00661-2
Mario Dalmaso, Anna Lorenzoni, Giovanni Galfano, Marta Riva, Luigi Castelli
Social attention can be defined as the tendency to orient attentional resources in response to spatial cues provided by others, such as their gaze or head direction. This mechanism is essential for navigating real-world environments, where rapidly and accurately interpreting others' behaviour is often critical. Regarding head-driven orienting, research suggests that social attention can be enhanced when a front-facing head cue establishes eye contact (vs. no eye contact) with the observer, but also when the head cue is viewed from behind (vs. from the front), in which case eye contact cannot be established. Across three experiments, we directly compared these two scenarios, both highly common in everyday life, by presenting a central head cue showing either the front of a head (establishing eye contact) or the back, followed by a turn to the left or right. In Experiments 1 and 2, participants manually responded to peripheral targets while ignoring the head cue, whereas in Experiment 3, oculomotor responses were recorded. Although the initial view of the head did not affect manual responses, eye movement data revealed enhanced social attention when the head was initially viewed from the front. These results suggest that eye movements provide a sensitive measure for detecting potential social modulations of attention. Moreover, eye contact here confirms its role as a powerful social signal for humans, capable of boosting overt orienting responses. Future research should explore these effects in more dynamic and ecologically valid settings, such as real social interactions.
Title: Uncovering everyday attention in the lab: front-viewed heads boost overt social orienting. Cognitive Research: Principles and Implications, 10(1), 45. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12316623/pdf/
Pub Date: 2025-07-30 | DOI: 10.1186/s41235-025-00659-w
Hanshu Zhang, Ran Zhou, Cheng-You Cheng, Sheng-Hsu Huang, Ming-Hui Cheng, Cheng-Ta Yang
Although it is commonly believed that automation aids human decision-making, conflicting evidence raises questions about whether individuals gain greater advantages from automation in difficult tasks. Our study examines the combined influence of task difficulty and automation reliability on aided decision-making. We assessed decision efficiency with the single-target self-terminating (STST) capacity coefficient from Systems Factorial Technology, which estimates the ratio of performance with aided information to performance without it. Participants performed a shape categorization task, judging whether the presented stimulus belonged to one category or another. In Experiment 1, three automation reliability conditions (high reliability, low reliability, and unaided) were tested in separate blocks. In general, participants exhibited unlimited capacity when provided with valid automated cues, implying that decision efficiency was unaltered by automated assistance. Despite this failure to gain extra efficiency, the benefits of automated aids for decision-making in difficult tasks were evident. In Experiment 2, the automation reliability conditions were randomly intermixed. In this scenario, the impact of automation reliability on participants' performance diminished, while the importance of information accuracy increased. Our study illustrates how the presentation of automation, its reliability, and task difficulty interactively influence how participants process automated information for decision-making. These findings may help improve processing efficiency in automated systems, informing better interface design and automation deployment.
Title: Decision-making efficiency with aided information: the impact of automation reliability and task difficulty. Cognitive Research: Principles and Implications, 10(1), 44. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12311095/pdf/
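A capacity coefficient of this kind is typically defined as a ratio of cumulative hazard functions, C(t) = H_aided(t) / H_unaided(t), with C(t) = 1 indicating unlimited capacity, C(t) > 1 super capacity, and C(t) < 1 limited capacity. A minimal sketch of that computation (illustrative only; the function names and toy response times below are assumptions, not the study's materials or exact estimator):

```python
import numpy as np

def cumulative_hazard(rts, t_grid):
    """Nelson-Aalen estimate of the cumulative hazard H(t) from a
    sample of response times (assumes no censored observations)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    n = len(rts)
    increments = 1.0 / (n - np.arange(n))  # 1 / (number still at risk)
    h_at_events = np.cumsum(increments)
    # Evaluate the resulting step function on the requested time grid
    idx = np.searchsorted(rts, t_grid, side="right")
    return np.where(idx > 0, h_at_events[np.clip(idx - 1, 0, n - 1)], 0.0)

def capacity_coefficient(aided_rts, unaided_rts, t_grid):
    """C(t) = H_aided(t) / H_unaided(t); C > 1 super capacity,
    C = 1 unlimited capacity, C < 1 limited capacity."""
    h_aided = cumulative_hazard(aided_rts, t_grid)
    h_unaided = cumulative_hazard(unaided_rts, t_grid)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(h_unaided > 0, h_aided / h_unaided, np.nan)

# Toy RTs in ms (made up): aided responses are uniformly faster
aided = [300, 320, 340, 360]
unaided = [400, 420, 440, 460]
C = capacity_coefficient(aided, unaided, np.array([450.0]))
```

Because the aided RTs accumulate hazard faster, C(450) exceeds 1 in this toy example; real analyses compare C(t) against 1 across the whole RT range.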
Pub Date: 2025-07-26 | DOI: 10.1186/s41235-025-00658-x
Xiang Che, Jiayue Ma, Yu Zhang, Chen Zhou, Qian Zhou, Kun Zhang, Jijun Lan, Qi Hui, Jie Li
Classical two-dimensional multiple object tracking (2D-MOT) measures the cognitive ability to track multiple moving elements in realistic scenarios. Stereo-three-dimensional MOT (S-3D-MOT), a more ecologically valid variant of 2D-MOT, shows better tracking performance in soccer players. Its distinguishing feature is the additional binocular and monocular 3D cues relative to 2D-MOT, but the individual contributions of these cues to MOT performance are unclear. To fill this gap, the current study introduced a three-dimensional MOT task on a flat screen (F-3D-MOT) to distinguish the roles of binocular and monocular 3D cues: F-3D-MOT provides additional monocular 3D cues relative to classical 2D-MOT but lacks the binocular 3D cues of S-3D-MOT. Moreover, whether the effects of these 3D cues on MOT performance differ between soccer players and non-athletes remained unclear, so both groups were recruited. The results showed that soccer players performed significantly better than non-athletes specifically in S-3D-MOT, indicating their enhanced sensitivity to binocular 3D cues. In contrast, neither monocular cues (F-3D-MOT) nor 2D displays yielded significant differences between the two groups.
Title: Binocular vs. monocular 3D cues in multiple object tracking: expertise differences between soccer players and non-athletes. Cognitive Research: Principles and Implications, 10(1), 43. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12297074/pdf/
Pub Date : 2025-07-15DOI: 10.1186/s41235-025-00647-0
Hagit Magen, Michal Tomer-Offen
In many everyday circumstances, individuals offload information to external stores (e.g., shopping lists) to compensate for the limitations of internal memory. When saving information externally, individuals tend to refrain from actively encoding an additional internal copy of that information, weakening its internal trace. This study examined whether the loss of the internal trace due to offloading is limited to item-specific information (e.g., apples, milk) or extends to relational information as well (e.g., fruit, dairy products). In the first two blocks of each of two experiments, participants learned lists of 20 unrelated words, which they could save externally for use during a subsequent memory test. In the third block, participants learned a categorized list consisting of 6 exemplars from each of 8 semantic categories. Although participants could save this list externally, they were denied access to it at test. Half of the participants were informed that the list would be unavailable at test and thus had to rely on internal memory, whereas the remaining participants expected the list to be available. Reliance on the external store reduced the internal trace of the offloaded information. Notably, saving the information externally decreased internal memory for both item-specific and relational information. This study indicates that internal memory for relational information does not effectively support the retrieval of information from external stores, and suggests that optimal organization of external stores should highlight relational information.
{"title":"Reduced relational and item-specific processing in cognitive offloading.","authors":"Hagit Magen, Michal Tomer-Offen","doi":"10.1186/s41235-025-00647-0","DOIUrl":"10.1186/s41235-025-00647-0","url":null,"abstract":"<p><p>In many circumstances in everyday life, individuals offload information to external stores (e.g., shopping lists) to compensate for limitations in internal memory. When saving information externally, individuals tend to refrain from actively encoding an additional internal copy of the information, leading to a weakening of its internal trace. This study examined whether the loss of the internal trace due to offloading is limited to item-specific information (e.g., apples, milk) or extends to relational information as well (e.g., fruit, dairy products). In the first two blocks of each of two experiments, participants learned lists of 20 unrelated words, which they could save externally for use during a subsequent memory test. In the third block, participants learned a categorized list consisting of 6 exemplars from 8 semantic categories. While participants could save the list externally, they were prevented access to the list at test. Half of the participants were informed that the list would be unavailable at test, thus relying on internal memory, whereas the remaining participants trusted the list availability. Reliance on the external store led to a reduction in the internal trace of the offloaded information. Notably, saving the information externally resulted in decreased internal memory for both item-specific and relational information. 
This study indicates that internal memory for relational information does not effectively support the retrieval of information from external stores, and suggests that optimal organization of external stores should highlight relational information.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"10 1","pages":"41"},"PeriodicalIF":3.4,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12260133/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144638327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-15DOI: 10.1186/s41235-025-00651-4
Mara Stockner, Giuliana Mazzoni, Francesco Ianì
"Motor fluency" refers to the ease with which an action can be performed, and several studies have shown that it can modulate various cognitive processes, such as memory and decision making. Typing-based paradigms have proven useful for investigating these effects. In this literature, building on pioneering work that analysed inter-keystroke intervals (IKIs, the time that elapses between two keystrokes), several studies have assumed that letter dyads typed with different hands are more fluent than dyads typed with the same hand. To date, however, no study has examined subjectively perceived typing fluency, i.e., the feeling of fluency experienced by typists, and this classical conceptualization has not been updated in the last decade. This raises the question of whether the same-hand/different-hands distinction is also reflected in the subjective feeling of fluency, and whether it still holds for today's generation of everyday typists. We therefore investigated the validity of this dyad fluency classification by measuring both objective and subjective typing fluency in two samples of university students. The objective measures comprised the response times required to type entire dyads (Experiment 1) and reaction times from stimulus presentation to first keypress, alongside IKIs (Experiment 2). Overall, both objective and subjective measures consistently showed the opposite trend to the classical assumption: same-hand dyads are, and are perceived to be, more fluent than different-hands dyads. Our results have important methodological implications for future research on typing-related motor fluency.
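To make the IKI measure concrete, here is a minimal sketch (not taken from the paper) of how IKIs can be computed from timestamped keystrokes and grouped by the classical same-hand vs. different-hands dyad split; the QWERTY hand assignment below is a common convention and an assumption, not the study's own coding scheme.

```python
# Hedged sketch: inter-keystroke intervals (IKIs) per dyad class.
# LEFT_HAND / RIGHT_HAND reflect an assumed QWERTY hand assignment.
LEFT_HAND = set("qwertasdfgzxcvb")
RIGHT_HAND = set("yuiophjklnm")

def dyad_ikis(keystrokes):
    """keystrokes: list of (letter, timestamp_ms) in typing order.
    Returns mean IKI (ms) for same-hand and different-hands dyads."""
    buckets = {"same": [], "different": []}
    for (a, t0), (b, t1) in zip(keystrokes, keystrokes[1:]):
        iki = t1 - t0  # IKI: time elapsed between two consecutive keystrokes
        same_hand = (a in LEFT_HAND) == (b in LEFT_HAND)
        buckets["same" if same_hand else "different"].append(iki)
    return {k: sum(v) / len(v) if v else None for k, v in buckets.items()}

# Toy example: "fr" and "ju" are same-hand dyads, "rj" is a different-hands dyad.
events = [("f", 0), ("r", 180), ("j", 290), ("u", 450)]
print(dyad_ikis(events))  # → {'same': 170.0, 'different': 110.0}
```

Whether the same-hand means come out faster, as the study reports, or slower, as classically assumed, is then an empirical question about the collected timestamps.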
{"title":"Are fluent letter dyads really fluent? An update on objective and subjective motor fluency in an Italian student population.","authors":"Mara Stockner, Giuliana Mazzoni, Francesco Ianì","doi":"10.1186/s41235-025-00651-4","DOIUrl":"10.1186/s41235-025-00651-4","url":null,"abstract":"<p><p>\"Motor fluency\" refers to the ease with which an action can be performed and several studies have shown how it can modulate various cognitive processes, such as memory and decision making. To investigate these implications of motor fluency, typing-based paradigms have been proven to be useful. In this literature, based on pioneering works that analysed inter-keystroke intervals (IKIs, the time that elapses between two keystrokes), several studies have assumed that letter dyads typed with different hands are more fluent than dyads typed with the same hand. However, to date, there is no literature analysing subjectively perceived typing fluency, i.e. the feeling of fluency experienced by typists. Moreover, this classical conceptualization has not been updated in the last decade. This raises the question of whether this distinction is also reflected in the subjective feeling of fluency, and whether it is still valid in today's generation of everyday typists. Thus, we investigated the validity of dyad fluency classification by measuring both objective and subjective typing fluency in two samples of university students. The objective measure included both the response times required to type the entire dyads (Experiment 1) as well as reaction times from stimulus presentation to first keypress alongside IKIs (Experiment 2). Overall, we found consistent results that both objective and subjective measures follow the opposite trend compared to classical assumptions: same-hand dyads are (perceived) more fluent than different-hands dyads. 
Our results have important methodological implications for future research on typing-related motor fluency.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"10 1","pages":"42"},"PeriodicalIF":3.4,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12259504/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144638326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}