Where scrollbars are clicked, and why.
Pub Date: 2024-04-19 | DOI: 10.1186/s41235-024-00551-z
Oliver Herbort, Philipp Raßbach, Wilfried Kunde
Scrolling is a widely used means of interacting with visual displays, usually to move content to a certain target location on the display. Understanding how users scroll can help identify potentially suboptimal use and allows users' intentions to be inferred. In the present study, we examined where users click on a scrollbar depending on the intended scrolling action. In two online experiments, click positions were systematically adapted to the intended scrolling action. Click position selection could not be explained by strict optimization of the distance traveled with the cursor, by memory load, or by motor-cognitive factors. By contrast, for identical scrolling actions, click positions strongly depended on the context and on previous scrolls. The behavior of our participants closely resembled behavior observed for the manipulation of physical devices and suggested a simple heuristic of movement planning. The results have implications for modeling human-computer interaction and may contribute to predicting user behavior.
{"title":"Where scrollbars are clicked, and why.","authors":"Oliver Herbort, Philipp Raßbach, Wilfried Kunde","doi":"10.1186/s41235-024-00551-z","DOIUrl":"10.1186/s41235-024-00551-z","url":null,"abstract":"<p><p>Scrolling is a widely used mean to interact with visual displays, usually to move content to a certain target location on the display. Understanding how user scroll might identify potentially suboptimal use and allows to infer users' intentions. In the present study, we examined where users click on a scrollbar depending on the intended scrolling action. In two online experiments, click positions were systematically adapted to the intended scrolling action. Click position selection could not be explained as strict optimization of the distance traveled with the cursor, memory load, or motor-cognitive factors. By contrast, for identical scrolling actions click positions strongly depended on the context and on previous scrolls. The behavior of our participants closely resembled behavior observed for manipulation of other physical devices and suggested a simple heuristic of movement planning. The results have implications for modeling human-computer interaction and may contribute to predicting user behavior.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"23"},"PeriodicalIF":4.1,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11026321/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140865390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effect of task load, information reliability and interdependency on anticipation performance.
Pub Date: 2024-04-14 | DOI: 10.1186/s41235-024-00548-8
Colm P Murphy, Oliver R Runswick, N Viktor Gredin, David P Broadbent
In sport, coaches often explicitly provide athletes with stable contextual information related to opponent action preferences to enhance anticipation performance. This information can be dependent on, or independent of, dynamic contextual information that only emerges during the sequence of play (e.g. opponent positioning). The interdependency between contextual information sources, and the associated cognitive demands of integrating information sources during anticipation, has not yet been systematically examined. We used a temporal occlusion paradigm to alter the reliability of contextual and kinematic information during the early, mid- and final phases of a two-versus-two soccer anticipation task. A dual-task paradigm was incorporated to investigate the impact of task load on skilled soccer players' ability to integrate information and update their judgements in each phase. Across conditions, participants received no contextual information (control) or stable contextual information (opponent preferences) that was dependent on, or independent of, dynamic contextual information (opponent positioning). As predicted, participants used reliable contextual and kinematic information to enhance anticipation. Further exploratory analysis suggested that increased task load detrimentally affected anticipation accuracy but only when both reliable contextual and kinematic information were available for integration in the final phase. This effect was observed irrespective of whether the stable contextual information was dependent on, or independent of, dynamic contextual information. Findings suggest that updating anticipatory judgements in the final phase of a sequence of play based on the integration of reliable contextual and kinematic information requires cognitive resources.
{"title":"The effect of task load, information reliability and interdependency on anticipation performance.","authors":"Colm P Murphy, Oliver R Runswick, N Viktor Gredin, David P Broadbent","doi":"10.1186/s41235-024-00548-8","DOIUrl":"10.1186/s41235-024-00548-8","url":null,"abstract":"<p><p>In sport, coaches often explicitly provide athletes with stable contextual information related to opponent action preferences to enhance anticipation performance. This information can be dependent on, or independent of, dynamic contextual information that only emerges during the sequence of play (e.g. opponent positioning). The interdependency between contextual information sources, and the associated cognitive demands of integrating information sources during anticipation, has not yet been systematically examined. We used a temporal occlusion paradigm to alter the reliability of contextual and kinematic information during the early, mid- and final phases of a two-versus-two soccer anticipation task. A dual-task paradigm was incorporated to investigate the impact of task load on skilled soccer players' ability to integrate information and update their judgements in each phase. Across conditions, participants received no contextual information (control) or stable contextual information (opponent preferences) that was dependent on, or independent of, dynamic contextual information (opponent positioning). As predicted, participants used reliable contextual and kinematic information to enhance anticipation. Further exploratory analysis suggested that increased task load detrimentally affected anticipation accuracy but only when both reliable contextual and kinematic information were available for integration in the final phase. This effect was observed irrespective of whether the stable contextual information was dependent on, or independent of, dynamic contextual information. Findings suggest that updating anticipatory judgements in the final phase of a sequence of play based on the integration of reliable contextual and kinematic information requires cognitive resources.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"22"},"PeriodicalIF":4.1,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11016527/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140867431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On investigating drivers' attention allocation during partially-automated driving.
Pub Date: 2024-04-10 | DOI: 10.1186/s41235-024-00549-7
Reem Jalal Eddine, Claudio Mulatti, Francesco N Biondi
The use of partially-automated systems requires drivers to supervise the system's functioning and resume manual control whenever necessary. Yet the literature on vehicle automation shows that drivers may spend more time looking away from the road when the partially-automated system is operational. In this study, we ask whether this pattern is a manifestation of inattentional blindness or whether, more dangerously, it is also accompanied by greater attentional processing of the driving scene. Participants drove a simulated vehicle in manual or partially-automated mode. Fixations were recorded by means of a head-mounted eye-tracker. A surprise two-alternative forced-choice recognition task was administered at the end of data collection, in which participants were quizzed on the presence of roadside billboards that they had encountered during the two drives. Data showed that participants were more likely to fixate and recognize billboards when the automated system was operational. Furthermore, whereas fixations toward billboards decreased toward the end of the automated drive, performance in the recognition task did not suffer. Based on these findings, we hypothesize that the use of the partially-automated driving system may result in an increase in attention allocation toward peripheral objects in the road scene, which is detrimental to drivers' ability to supervise the automated system and resume manual control of the vehicle.
{"title":"On investigating drivers' attention allocation during partially-automated driving.","authors":"Reem Jalal Eddine, Claudio Mulatti, Francesco N Biondi","doi":"10.1186/s41235-024-00549-7","DOIUrl":"10.1186/s41235-024-00549-7","url":null,"abstract":"<p><p>The use of partially-automated systems require drivers to supervise the system functioning and resume manual control whenever necessary. Yet literature on vehicle automation show that drivers may spend more time looking away from the road when the partially-automated system is operational. In this study we answer the question of whether this pattern is a manifestation of inattentional blindness or, more dangerously, it is also accompanied by a greater attentional processing of the driving scene. Participants drove a simulated vehicle in manual or partially-automated mode. Fixations were recorded by means of a head-mounted eye-tracker. A surprise two-alternative forced-choice recognition task was administered at the end of the data collection whereby participants were quizzed on the presence of roadside billboards that they encountered during the two drives. Data showed that participants were more likely to fixate and recognize billboards when the automated system was operational. Furthermore, whereas fixations toward billboards decreased toward the end of the automated drive, the performance in the recognition task did not suffer. Based on these findings, we hypothesize that the use of the partially-automated driving system may result in an increase in attention allocation toward peripheral objects in the road scene which is detrimental to the drivers' ability to supervise the automated system and resume manual control of the vehicle.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"21"},"PeriodicalIF":4.1,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11006638/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140871987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human and AI collaboration in the higher education environment: opportunities and concerns.
Pub Date: 2024-04-08 | DOI: 10.1186/s41235-024-00547-9
Paul Atchley, Hannah Pannell, Kaelyn Wofford, Michael Hopkins, Ruth Ann Atchley
In service of the goal of examining how cognitive science can facilitate human-computer interactions in complex systems, we explore how cognitive psychology research might help educators better utilize artificial intelligence and AI-supported tools as facilitators of learning, rather than see these emerging technologies as a threat. We also aim to provide historical perspective, both on how automation and technology have generated unnecessary apprehension over time and on how generative AI technologies such as ChatGPT are a product of the discipline of cognitive science. We introduce a model for how higher education instruction can adapt to the age of AI by fully capitalizing on the role that metacognitive knowledge and skills play in determining learning effectiveness. Finally, we urge educators to consider how AI can be seen as a critical collaborator to be utilized in our efforts to educate around the critical workforce skills of effective communication and collaboration.
{"title":"Human and AI collaboration in the higher education environment: opportunities and concerns.","authors":"Paul Atchley, Hannah Pannell, Kaelyn Wofford, Michael Hopkins, Ruth Ann Atchley","doi":"10.1186/s41235-024-00547-9","DOIUrl":"10.1186/s41235-024-00547-9","url":null,"abstract":"<p><p>In service of the goal of examining how cognitive science can facilitate human-computer interactions in complex systems, we explore how cognitive psychology research might help educators better utilize artificial intelligence and AI supported tools as facilitatory to learning, rather than see these emerging technologies as a threat. We also aim to provide historical perspective, both on how automation and technology has generated unnecessary apprehension over time, and how generative AI technologies such as ChatGPT are a product of the discipline of cognitive science. We introduce a model for how higher education instruction can adapt to the age of AI by fully capitalizing on the role that metacognition knowledge and skills play in determining learning effectiveness. Finally, we urge educators to consider how AI can be seen as a critical collaborator to be utilized in our efforts to educate around the critical workforce skills of effective communication and collaboration.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"20"},"PeriodicalIF":4.1,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11001814/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140852325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of artificial intelligence to eyewitness identification.
Pub Date: 2024-04-03 | DOI: 10.1186/s41235-024-00542-0
Heather Kleider-Offutt, Beth Stevens, Laura Mickes, Stewart Boogert
Artificial intelligence is already all around us, and its usage will only increase. Knowing its capabilities is critical. A facial recognition system (FRS) is a tool for law enforcement during suspect searches and when presenting photos to eyewitnesses for identification. However, there are no comparisons between eyewitness and FRS accuracy using video, so it is unknown whether FRS face matches are more accurate than eyewitness memory when identifying a perpetrator. Ours is the first application of artificial intelligence to an eyewitness experience, using a comparative psychology approach. As a first step to test system accuracy relative to eyewitness accuracy, participants and an open-source FRS (FaceNet) attempted perpetrator identification/matching from lineup photos (target-present, target-absent) after exposure to real crime videos with varied clarity and perpetrator race. The FRS used video probe images of each perpetrator to obtain similarity ratings for each corresponding lineup member. Using receiver operating characteristic analysis to measure discriminability, FRS performance was superior to eyewitness performance, regardless of video clarity or perpetrator race. Video clarity affected participant performance, with unclear videos yielding lower performance than clear videos. Using confidence-accuracy characteristic analysis to measure reliability (i.e., the likelihood that the identified suspect is the actual perpetrator), when the FRS identified faces with the highest similarity values, those identifications were accurate. The results suggest that FaceNet, or similarly performing systems, may supplement eyewitness memory for suspect searches and subsequent lineup construction, and that knowing a system's strengths and weaknesses is critical.
{"title":"Application of artificial intelligence to eyewitness identification.","authors":"Heather Kleider-Offutt, Beth Stevens, Laura Mickes, Stewart Boogert","doi":"10.1186/s41235-024-00542-0","DOIUrl":"10.1186/s41235-024-00542-0","url":null,"abstract":"<p><p>Artificial intelligence is already all around us, and its usage will only increase. Knowing its capabilities is critical. A facial recognition system (FRS) is a tool for law enforcement during suspect searches and when presenting photos to eyewitnesses for identification. However, there are no comparisons between eyewitness and FRS accuracy using video, so it is unknown whether FRS face matches are more accurate than eyewitness memory when identifying a perpetrator. Ours is the first application of artificial intelligence to an eyewitness experience, using a comparative psychology approach. As a first step to test system accuracy relative to eyewitness accuracy, participants and an open-source FRS (FaceNet) attempted perpetrator identification/match from lineup photos (target-present, target-absent) after exposure to real crime videos with varied clarity and perpetrator race. FRS used video probe images of each perpetrator to achieve similarity ratings for each corresponding lineup member. Using receiver operating characteristic analysis to measure discriminability, FRS performance was superior to eyewitness performance, regardless of video clarity or perpetrator race. Video clarity impacted participant performance, with the unclear videos yielding lower performance than the clear videos. Using confidence-accuracy characteristic analysis to measure reliability (i.e., the likelihood the identified suspect is the actual perpetrator), when the FRS identified faces with the highest similarity values, they were accurate. The results suggest FaceNet, or similarly performing systems, may supplement eyewitness memory for suspect searches and subsequent lineup construction and knowing the system's strengths and weaknesses is critical.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"19"},"PeriodicalIF":4.3,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10991253/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140858749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inattentional blindness in medicine.
Pub Date: 2024-03-27 | DOI: 10.1186/s41235-024-00537-x
Connor M Hults, Yifan Ding, Geneva G Xie, Rishi Raja, William Johnson, Alexis Lee, Daniel J Simons
People often fail to notice unexpected stimuli when their attention is directed elsewhere. Most studies of this "inattentional blindness" have been conducted using laboratory tasks with little connection to real-world performance. Medical case reports document examples of missed findings in radiographs and CT images, unintentionally retained guidewires following surgery, and additional conditions being overlooked after making initial diagnoses. These cases suggest that inattentional blindness might contribute to medical errors, but relatively few studies have directly examined inattentional blindness in realistic medical contexts. We review the existing literature, much of which focuses on the use of augmented reality aids or inspection of medical images. Although these studies suggest a role for inattentional blindness in errors, most of the studies do not provide clear evidence that these errors result from inattentional blindness as opposed to other mechanisms. We discuss the design, analysis, and reporting practices that can make the contributions of inattentional blindness unclear, and we describe guidelines for future research in medicine and similar contexts that could provide clearer evidence for the role of inattentional blindness.
{"title":"Inattentional blindness in medicine.","authors":"Connor M Hults, Yifan Ding, Geneva G Xie, Rishi Raja, William Johnson, Alexis Lee, Daniel J Simons","doi":"10.1186/s41235-024-00537-x","DOIUrl":"10.1186/s41235-024-00537-x","url":null,"abstract":"<p><p>People often fail to notice unexpected stimuli when their attention is directed elsewhere. Most studies of this \"inattentional blindness\" have been conducted using laboratory tasks with little connection to real-world performance. Medical case reports document examples of missed findings in radiographs and CT images, unintentionally retained guidewires following surgery, and additional conditions being overlooked after making initial diagnoses. These cases suggest that inattentional blindness might contribute to medical errors, but relatively few studies have directly examined inattentional blindness in realistic medical contexts. We review the existing literature, much of which focuses on the use of augmented reality aids or inspection of medical images. Although these studies suggest a role for inattentional blindness in errors, most of the studies do not provide clear evidence that these errors result from inattentional blindness as opposed to other mechanisms. We discuss the design, analysis, and reporting practices that can make the contributions of inattentional blindness unclear, and we describe guidelines for future research in medicine and similar contexts that could provide clearer evidence for the role of inattentional blindness.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"18"},"PeriodicalIF":4.1,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10973299/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140307381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward viewing behavior for aerial scene categorization.
Pub Date: 2024-03-26 | DOI: 10.1186/s41235-024-00541-1
Chenxi Jiang, Zhenzhong Chen, Jeremy M Wolfe
Previous work has demonstrated similarities and differences between aerial and terrestrial image viewing. Aerial scene categorization, a pivotal visual processing task for gathering geoinformation, heavily depends on rotation-invariant information. Aerial image-centered research has revealed effects of low-level features on the performance of various aerial image interpretation tasks. However, there are fewer studies of viewing behavior for aerial scene categorization and of higher-level factors that might influence that categorization. In this paper, experienced subjects' eye movements were recorded while they were asked to categorize aerial scenes. A typical viewing center bias was observed. Eye movement patterns varied among categories. We explored the relationship of nine image statistics to observers' eye movements. Results showed that if the images were less homogeneous, and/or if they contained fewer or no salient diagnostic objects, viewing behavior became more exploratory. Higher- and object-level image statistics were predictive at both the image and scene category levels. Scanpaths were generally organized, and small differences in scanpath randomness could be roughly captured by critical object saliency. Participants tended to fixate on critical objects. The image statistics included in this study showed rotational invariance. The results supported our hypothesis that the availability of diagnostic objects strongly influences eye movements in this task. In addition, this study provides supporting evidence for Loschky et al.'s (Journal of Vision, 15(6), 11, 2015) speculation that aerial scenes are categorized on the basis of image parts and individual objects. The findings are discussed in relation to theories of scene perception and their implications for automation development.
{"title":"Toward viewing behavior for aerial scene categorization.","authors":"Chenxi Jiang, Zhenzhong Chen, Jeremy M Wolfe","doi":"10.1186/s41235-024-00541-1","DOIUrl":"10.1186/s41235-024-00541-1","url":null,"abstract":"<p><p>Previous work has demonstrated similarities and differences between aerial and terrestrial image viewing. Aerial scene categorization, a pivotal visual processing task for gathering geoinformation, heavily depends on rotation-invariant information. Aerial image-centered research has revealed effects of low-level features on performance of various aerial image interpretation tasks. However, there are fewer studies of viewing behavior for aerial scene categorization and of higher-level factors that might influence that categorization. In this paper, experienced subjects' eye movements were recorded while they were asked to categorize aerial scenes. A typical viewing center bias was observed. Eye movement patterns varied among categories. We explored the relationship of nine image statistics to observers' eye movements. Results showed that if the images were less homogeneous, and/or if they contained fewer or no salient diagnostic objects, viewing behavior became more exploratory. Higher- and object-level image statistics were predictive at both the image and scene category levels. Scanpaths were generally organized and small differences in scanpath randomness could be roughly captured by critical object saliency. Participants tended to fixate on critical objects. Image statistics included in this study showed rotational invariance. The results supported our hypothesis that the availability of diagnostic objects strongly influences eye movements in this task. In addition, this study provides supporting evidence for Loschky et al.'s (Journal of Vision, 15(6), 11, 2015) speculation that aerial scenes are categorized on the basis of image parts and individual objects. The findings were discussed in relation to theories of scene perception and their implications for automation development.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"17"},"PeriodicalIF":4.1,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10965882/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140294944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Errors in visual search: Are they stochastic or deterministic?
Pub Date: 2024-03-19 | DOI: 10.1186/s41235-024-00543-z
Aoqi Li, Johan Hulleman, Jeremy M Wolfe
In any visual search task, in the lab or in the world, observers will make errors. Those errors can be categorized as "deterministic": if you miss this target in this display once, you will definitely miss it again. Alternatively, errors can be "stochastic", occurring randomly with some probability from trial to trial. Researchers and practitioners have sought to reduce errors in visual search, but different types of errors might require different techniques for mitigation. To empirically categorize errors in a simple search task, our observers searched for the letter "T" among "L" distractors, with each display presented twice. When the letters were clearly visible (white letters on a gray background), the errors were almost completely stochastic (Exp 1): an error made on the first appearance of a display did not predict that an error would be made on the second appearance. When the visibility of the letters was manipulated (letters of different gray levels on a noisy background), the errors became a mix of stochastic and deterministic; unsurprisingly, lower-contrast targets produced more deterministic errors (Exp 2). Using the stimuli of Exp 2, we tested whether errors could be reduced using cues that guided attention around the display but knew nothing about the content of that display (Exp 3a, b). This had no effect, but cueing all item locations did succeed in reducing deterministic errors (Exp 3c).
{"title":"Errors in visual search: Are they stochastic or deterministic?","authors":"Aoqi Li, Johan Hulleman, Jeremy M Wolfe","doi":"10.1186/s41235-024-00543-z","DOIUrl":"10.1186/s41235-024-00543-z","url":null,"abstract":"<p><p>In any visual search task in the lab or in the world, observers will make errors. Those errors can be categorized as \"deterministic\": If you miss this target in this display once, you will definitely miss it again. Alternatively, errors can be \"stochastic\", occurring randomly with some probability from trial to trial. Researchers and practitioners have sought to reduce errors in visual search, but different types of errors might require different techniques for mitigation. To empirically categorize errors in a simple search task, our observers searched for the letter \"T\" among \"L\" distractors, with each display presented twice. When the letters were clearly visible (white letters on a gray background), the errors were almost completely stochastic (Exp 1). An error made on the first appearance of a display did not predict that an error would be made on the second appearance. When the visibility of the letters was manipulated (letters of different gray levels on a noisy background), the errors became a mix of stochastic and deterministic. Unsurprisingly, lower contrast targets produced more deterministic errors. (Exp 2). Using the stimuli of Exp 2, we tested whether errors could be reduced using cues that guided attention around the display but knew nothing about the content of that display (Exp3a, b). This had no effect, but cueing all item locations did succeed in reducing deterministic errors (Exp3c).</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"15"},"PeriodicalIF":4.1,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10951178/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140177158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effect of fingerprint expertise on visual short-term memory.
Pub Date: 2024-03-19 | DOI: 10.1186/s41235-024-00539-9
Brooklyn J Corbett, Jason M Tangen, Rachel A Searston, Matthew B Thompson
Expert fingerprint examiners demonstrate impressive feats of memory that may support their accuracy when making high-stakes identification decisions. Understanding the interplay between expertise and memory is therefore critical. Across two experiments, we tested fingerprint examiners and novices on their visual short-term memory for fingerprints. In Experiment 1, experts showed substantially higher memory performance compared to novices for fingerprints from their domain of expertise. In Experiment 2, we manipulated print distinctiveness and found that while both groups benefited from distinctive prints, experts still outperformed novices. This indicates that beyond stimulus qualities, expertise itself enhances short-term memory, likely through more effective organisational processing and sensitivity to meaningful patterns. Taken together, these findings shed light on the cognitive mechanisms that may explain fingerprint examiners' superior memory performance within their domain of expertise. They further suggest that training to improve memory for diverse fingerprints could practically boost examiner performance. Given the high-stakes nature of forensic identification, characterising psychological processes like memory that potentially contribute to examiner accuracy has important theoretical and practical implications.
{"title":"The effect of fingerprint expertise on visual short-term memory.","authors":"Brooklyn J Corbett, Jason M Tangen, Rachel A Searston, Matthew B Thompson","doi":"10.1186/s41235-024-00539-9","DOIUrl":"10.1186/s41235-024-00539-9","url":null,"abstract":"<p><p>Expert fingerprint examiners demonstrate impressive feats of memory that may support their accuracy when making high-stakes identification decisions. Understanding the interplay between expertise and memory is therefore critical. Across two experiments, we tested fingerprint examiners and novices on their visual short-term memory for fingerprints. In Experiment 1, experts showed substantially higher memory performance compared to novices for fingerprints from their domain of expertise. In Experiment 2, we manipulated print distinctiveness and found that while both groups benefited from distinctive prints, experts still outperformed novices. This indicates that beyond stimulus qualities, expertise itself enhances short-term memory, likely through more effective organisational processing and sensitivity to meaningful patterns. Taken together, these findings shed light on the cognitive mechanisms that may explain fingerprint examiners' superior memory performance within their domain of expertise. They further suggest that training to improve memory for diverse fingerprints could practically boost examiner performance. Given the high-stakes nature of forensic identification, characterising psychological processes like memory that potentially contribute to examiner accuracy has important theoretical and practical implications.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"14"},"PeriodicalIF":4.1,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10951190/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140177160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How is GPS used? Understanding navigation system use and its relation to spatial ability.
Pub Date: 2024-03-19 | DOI: 10.1186/s41235-024-00545-x
Alexis Topete, Chuanxiuyue He, John Protzko, Jonathan Schooler, Mary Hegarty
Given how commonly GPS is now used in everyday navigation, it is surprising how little research has been dedicated to investigating variations in its use and how such variations may relate to navigation ability. The present study investigated general GPS dependence, how people report using GPS in various navigational scenarios, and the relationship between these measures and spatial abilities (assessed by self-report measures and the ability to learn the layout of a novel environment). GPS dependence is an individual's perceived need to use GPS in navigation, and GPS usage is the frequency with which they report using different functions of GPS. The study also assessed whether people modulate reported use of GPS as a function of their familiarity with the location in which they are navigating. In 249 participants over two preregistered studies, reported GPS dependence was negatively correlated with objective navigation performance and self-reported sense of direction, and positively correlated with spatial anxiety. Greater reported use of GPS for turn-by-turn directions was associated with a poorer sense of direction and higher spatial anxiety. People reported using GPS most frequently for time and traffic estimation, regardless of ability. Finally, people reported using GPS less, regardless of ability, when they were more familiar with an environment. Collectively these findings suggest that people moderate their use of GPS, depending on their knowledge, ability, and confidence in their own abilities, and often report using GPS to augment rather than replace spatial environmental knowledge.
{"title":"How is GPS used? Understanding navigation system use and its relation to spatial ability.","authors":"Alexis Topete, Chuanxiuyue He, John Protzko, Jonathan Schooler, Mary Hegarty","doi":"10.1186/s41235-024-00545-x","DOIUrl":"10.1186/s41235-024-00545-x","url":null,"abstract":"<p><p>Given how commonly GPS is now used in everyday navigation, it is surprising how little research has been dedicated to investigating variations in its use and how such variations may relate to navigation ability. The present study investigated general GPS dependence, how people report using GPS in various navigational scenarios, and the relationship between these measures and spatial abilities (assessed by self-report measures and the ability to learn the layout of a novel environment). GPS dependence is an individual's perceived need to use GPS in navigation, and GPS usage is the frequency with which they report using different functions of GPS. The study also assessed whether people modulate reported use of GPS as a function of their familiarity with the location in which they are navigating. In 249 participants over two preregistered studies, reported GPS dependence was negatively correlated with objective navigation performance and self-reported sense of direction, and positively correlated with spatial anxiety. Greater reported use of GPS for turn-by-turn directions was associated with a poorer sense of direction and higher spatial anxiety. People reported using GPS most frequently for time and traffic estimation, regardless of ability. Finally, people reported using GPS less, regardless of ability, when they were more familiar with an environment. Collectively these findings suggest that people moderate their use of GPS, depending on their knowledge, ability, and confidence in their own abilities, and often report using GPS to augment rather than replace spatial environmental knowledge.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"16"},"PeriodicalIF":4.1,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10951145/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140177159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}