How is GPS used? Understanding navigation system use and its relation to spatial ability
Alexis Topete, Chuanxiuyue He, John Protzko, Jonathan Schooler, Mary Hegarty
Pub Date: 2024-03-19 | DOI: 10.1186/s41235-024-00545-x | Cognitive Research: Principles and Implications, 9(1), 16

Given how commonly GPS is now used in everyday navigation, it is surprising how little research has investigated variations in its use and how such variations may relate to navigation ability. The present study investigated general GPS dependence, how people report using GPS in various navigational scenarios, and the relationship between these measures and spatial abilities (assessed by self-report measures and the ability to learn the layout of a novel environment). GPS dependence is an individual's perceived need to use GPS in navigation; GPS usage is the frequency with which they report using different functions of GPS. The study also assessed whether people modulate reported use of GPS as a function of their familiarity with the location in which they are navigating. In 249 participants over two preregistered studies, reported GPS dependence was negatively correlated with objective navigation performance and self-reported sense of direction, and positively correlated with spatial anxiety. Greater reported use of GPS for turn-by-turn directions was associated with a poorer sense of direction and higher spatial anxiety. People reported using GPS most frequently for time and traffic estimation, regardless of ability. Finally, people reported using GPS less, regardless of ability, when they were more familiar with an environment. Collectively, these findings suggest that people moderate their use of GPS depending on their knowledge, ability, and confidence in their own abilities, and often report using GPS to augment rather than replace spatial environmental knowledge.
Individual differences in emerging adults' spatial abilities: What role do affective factors play?
Carlos J Desme, Anthony S Dick, Timothy B Hayes, Shannon M Pruden
Pub Date: 2024-03-18 | DOI: 10.1186/s41235-024-00538-w | Cognitive Research: Principles and Implications, 9(1), 13

Spatial ability is defined as a cognitive or intellectual skill used to represent, transform, generate, and recall information about an object or the environment. Individual differences across spatial tasks have been strongly linked to science, technology, engineering, and mathematics (STEM) interest and success. Several variables have been proposed to explain individual differences in spatial ability, including affective factors such as one's confidence and anxiety. However, research is lacking on whether affective variables such as confidence and anxiety relate to individual differences in both a mental rotation task (MRT) and a perspective-taking and spatial orientation task (PTSOT). Using a sample of 100 college students completing introductory STEM courses, the present study investigated the effects of self-reported spatial confidence, spatial anxiety, and general anxiety on MRT and PTSOT performance. Spatial confidence, after controlling for general anxiety and biological sex, was significantly related to performance on both the MRT and the PTSOT. Spatial anxiety, after the same controls, was not related to either PTSOT or MRT scores. Together, these findings suggest that some affective factors, but not others, contribute to spatial ability performance to a degree that merits further investigation in future studies.
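The "controlling for" analyses above correspond to multiple regression with covariates: the predictor's slope on the outcome is estimated while the covariates are held constant. A minimal sketch with ordinary least squares, assuming hypothetical data and variable names (not the study's own):

```python
import numpy as np

def coef_with_covariates(y, predictor, covariates):
    """Estimate the predictor's slope on y while holding the covariates
    constant, via ordinary least squares with an intercept column."""
    X = np.column_stack([np.ones(len(y)), predictor] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # slope for the predictor of interest

# Illustrative, noise-free data: the adjusted slope recovers the true
# confidence effect (2.0) despite correlated covariates in the model.
rng = np.random.default_rng(0)
confidence = rng.normal(size=50)
general_anxiety = rng.normal(size=50)
sex = rng.integers(0, 2, size=50).astype(float)
score = 1.0 + 2.0 * confidence + 3.0 * general_anxiety + 0.5 * sex
adjusted_slope = coef_with_covariates(score, confidence, [general_anxiety, sex])
```

With real data one would also want standard errors and p-values (e.g. from a statistics package); this sketch only shows what "after controlling for" means computationally.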
Correction: Warning signals only support the first action in a sequence
Niklas Dietze, Lukas Recker, Christian H Poth
Pub Date: 2024-03-18 | DOI: 10.1186/s41235-024-00544-y | Cognitive Research: Principles and Implications, 9(1), 12
Designing and evaluating tasks to measure individual differences in experimental psychology: a tutorial
Marc Brysbaert
Pub Date: 2024-02-27 | DOI: 10.1186/s41235-024-00540-2 | Cognitive Research: Principles and Implications, 9(1), 11

Experimental psychology is witnessing an increase in research on individual differences, which requires the development of new tasks that can reliably assess variations among participants. To do this, cognitive researchers need statistical methods that many have not learned during their training. This lack of expertise can pose challenges not only in designing good new tasks but also in evaluating tasks developed by others. To bridge the gap, this article provides an overview of test psychology applied to performance tasks, covering fundamental concepts such as standardization, reliability, norming, and validity. It provides practical guidelines for developing and evaluating experimental tasks, as well as for combining tasks to better understand individual differences. To further address common misconceptions, the article lists 11 prevailing myths. The purpose of this guide is to give experimental psychologists the knowledge and tools needed to conduct rigorous and insightful studies of individual differences.
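One of the fundamental concepts the tutorial covers, reliability, is routinely estimated for performance tasks by splitting trials into halves and applying the Spearman-Brown correction. A stdlib-only sketch of that standard computation (the function names and data are illustrative, not taken from the article):

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(trial_matrix):
    """Split each participant's trials into odd- and even-numbered halves,
    correlate the half-scores across participants, then apply the
    Spearman-Brown correction to estimate full-length reliability."""
    odd_half = [statistics.fmean(row[0::2]) for row in trial_matrix]
    even_half = [statistics.fmean(row[1::2]) for row in trial_matrix]
    r = pearson_r(odd_half, even_half)
    return 2 * r / (1 + r)
```

In practice the tutorial's territory also includes permutation-based split-half estimates and model-based alternatives; this sketch only shows the classic formula.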
How do humans learn about the reliability of automation?
Luke Strickland, Simon Farrell, Micah K Wilson, Jack Hutchinson, Shayne Loft
Pub Date: 2024-02-16 | DOI: 10.1186/s41235-024-00533-1 | Cognitive Research: Principles and Implications, 9(1), 8

In a range of settings, human operators make decisions with the assistance of automation, the reliability of which can vary with context. The processes by which humans track the reliability of automation are currently unclear. In this study, we test cognitive models of learning that could potentially explain how humans track automation reliability. We fitted several alternative cognitive models to participants' judgements of automation reliability observed in a maritime classification task in which participants were provided with automated advice, across three experiments comprising eight between-subjects conditions and 240 participants in total. Our results favoured a two-kernel delta-rule model of learning, which specifies that humans learn by prediction error and respond according to a learning rate that is sensitive to environmental volatility. However, we found substantial heterogeneity in learning processes across participants. These outcomes speak to the learning processes underlying how humans estimate automation reliability and thus have implications for practice.
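The delta-rule idea above can be sketched in a few lines. Note this is a single-learning-rate simplification for illustration: the favoured two-kernel model combines a fast and a slow kernel with volatility-sensitive weighting, and the parameter values here are arbitrary, not fitted ones from the paper.

```python
def delta_rule_update(estimate, outcome, learning_rate):
    """One delta-rule step: shift the reliability estimate toward the
    latest outcome in proportion to the prediction error."""
    return estimate + learning_rate * (outcome - estimate)

def track_reliability(outcomes, learning_rate=0.2, initial=0.5):
    """Update a running reliability estimate after each automation
    outcome (1 = advice correct, 0 = advice incorrect)."""
    history = []
    estimate = initial
    for outcome in outcomes:
        estimate = delta_rule_update(estimate, outcome, learning_rate)
        history.append(estimate)
    return history
```

A run of correct advice pulls the estimate upward, and a surprise error pulls it back down, with the learning rate controlling how strongly recent outcomes dominate, which is what a volatility-sensitive learner would adapt.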
Machine translation: Turkish-English bilingual speakers' accuracy detection of evidentiality and preference of MT
Sümeyra Tosun
Pub Date: 2024-02-16 | DOI: 10.1186/s41235-024-00535-z | Cognitive Research: Principles and Implications, 9(1), 10

Machine translation (MT) is the automated translation of text between languages, encompassing a wide range of language pairs. This study focuses on non-professional bilingual speakers of Turkish and English, aiming to assess their ability to discern accuracy in machine translations and their preferences regarding MT. A particular emphasis is placed on the linguistically subtle yet semantically meaningful concept of evidentiality. In this experimental investigation, 36 Turkish-English bilinguals, comprising both early and late bilinguals, were presented with simple declarative sentences. These sentences varied in their evidential meaning, distinguishing between firsthand and non-firsthand evidence. Participants were then shown machine translations of these sentences in both directions (Turkish to English and English to Turkish) and asked to judge the accuracy of those translations. Additionally, participants were queried about their preference for MT in four crucial domains: medical, legal, academic, and daily contexts. The findings indicated that late bilinguals were better able to detect translation accuracy, particularly for firsthand evidence translations, than their early bilingual counterparts. Concerning preference for MT, age of acquisition and accuracy detection of non-firsthand sentence translations emerged as significant predictors.
Correction: The role of leadership level in college students' facial emotion recognition: evidence from event-related potential analysis
Huang Gu, Shunshun Du, Peipei Jin, Chengming Wang, Hui He, Mingnan Zhao
Pub Date: 2024-02-16 | DOI: 10.1186/s41235-024-00536-y | Cognitive Research: Principles and Implications, 9(1), 9
Unveiling why race does not affect the mask effect on attractiveness: but gender and expression do
Ellie Hewer, Michael B Lewis
Pub Date: 2024-02-14 | DOI: 10.1186/s41235-024-00534-0 | Cognitive Research: Principles and Implications, 9(1), 7

Studies show that surgical face masks can have both positive and negative effects on attractiveness. Race has been implicated as a moderator of the size of this mask effect. Here, the moderating effects of expression, race, and gender are explored. The mask effect was more positive for males than for females and for neutral faces than for smiling faces, and there were differences between the races. Further, the effect of unmasked attractiveness was partialled out for each image, which removed the race effects, but the gender and expression effects remained. It is suggested that racial differences previously observed in the mask effect are a consequence of differences in attractiveness of the faces sampled from those races. Re-analysis of previous research that showed race effects also demonstrates how they are better explained as attractiveness effects rather than race effects. This explanation can provide order to the different findings observed across the literature.
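"Partialling out" unmasked attractiveness amounts to residualising the mask-effect scores on attractiveness and analysing what remains. A stdlib-only sketch of residualisation via simple linear regression, as a generic illustration of the technique rather than the paper's exact procedure:

```python
import statistics

def residualize(y, x):
    """Remove the linear effect of x from y: fit y = a + b*x by least
    squares and return the residuals, the part of y unrelated to x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
```

If group differences (here, between races) vanish once scores are residualised on a covariate, the covariate, not group membership, is the better explanation, which is the logic of the re-analysis described above.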
Highly dangerous road hazards are not immune from the low prevalence effect
Jiali Song, Benjamin Wolfe
Pub Date: 2024-02-02 | DOI: 10.1186/s41235-024-00531-3 | Cognitive Research: Principles and Implications, 9(1), 6

The low prevalence effect (LPE) is a cognitive limitation commonly found in visual search tasks, in which observers miss rare targets. Drivers looking for road hazards are also subject to the LPE. However, not all road hazards are equal; a paper bag floating down the road is much less dangerous than a rampaging moose. Here, we asked whether perceived hazardousness modulated the LPE. To examine this, we took a dataset in which 48 raters assessed the perceived dangerousness of hazards in recorded road videos (Song et al. in Behav Res Methods, 2023. https://doi.org/10.3758/s13428-023-02299-8 ) and correlated the ratings with data from a hazard detection task using the same stimuli with varying hazard prevalence rates (Kosovicheva et al. in Psychon Bull Rev 30(1):212-223, 2023. https://doi.org/10.3758/s13423-022-02159-0 ). We found that while hazard detectability increased monotonically with hazardousness ratings, the LPE was comparable across perceived hazardousness levels. Our findings are consistent with the decision criterion account of the LPE, in which target rarity induces a conservative shift in criterion. Importantly, feedback was necessary for a large and consistent LPE; when participants were not given feedback about their accuracy, the most dangerous hazards showed a non-significant LPE. However, eliminating feedback was not enough to induce the opposite of the LPE: prevalence-induced concept change (Levari et al. in Science 360(6396):1465-1467, 2018. https://doi.org/10.1126/science.aap8731 ), in which participants adopt a more liberal criterion when instances of a category become rare. Our results suggest that the road hazard LPE may be somewhat affected by the inherent variability of driving situations, but is still observed for highly dangerous hazards.
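The decision criterion account is naturally stated in signal detection terms: low prevalence shifts the criterion c in the conservative (positive) direction while sensitivity d' need not change. A sketch of the standard computation from raw counts, using the common log-linear correction; this illustrates the framework, not the paper's exact analysis:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and criterion (c) from raw counts.
    Adds 0.5 to each cell (log-linear correction) so that perfect
    hit or false-alarm rates do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

An unbiased observer yields c near zero; an observer who rarely says "hazard" (fewer hits and fewer false alarms) yields c > 0, which is the conservative shift the LPE produces.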
Face masks and fake masks: the effect of real and superimposed masks on face matching with super-recognisers, typical observers, and algorithms
Kay L Ritchie, Daniel J Carragher, Josh P Davis, Katie Read, Ryan E Jenkins, Eilidh Noyes, Katie L H Gray, Peter J B Hancock
Pub Date: 2024-02-02 | DOI: 10.1186/s41235-024-00532-2 | Cognitive Research: Principles and Implications, 9(1), 5

Mask wearing has been required in various settings since the outbreak of COVID-19, and research has shown that identity judgements are difficult for faces wearing masks. To date, however, the majority of experiments on face identification with masked faces tested humans and computer algorithms using images with superimposed masks rather than images of people wearing real face coverings. In three experiments we test humans (control participants and super-recognisers) and algorithms with images showing different types of face coverings. In all experiments we tested matching concealed or unconcealed faces to an unconcealed reference image, and we found a consistent decrease in face matching accuracy with masked compared to unconcealed faces. In Experiment 1, typical human observers were most accurate at face matching with unconcealed images, and poorer for three different types of superimposed mask conditions. In Experiment 2, we tested both typical observers and super-recognisers with superimposed and real face masks, and found that performance was poorer for real compared to superimposed masks. The same pattern was observed in Experiment 3 with algorithms. Our results highlight the importance of testing both humans and algorithms with real face masks, as using only superimposed masks may underestimate their detrimental effect on face identification.