Crisis-related stimuli do not increase the emotional attentional blink in a general university student population.
Pub Date: 2024-01-08 | DOI: 10.1186/s41235-023-00525-7
Lindsay A Santacroce, Benjamin J Tamber-Rosenau
Crises such as natural disasters or pandemics negatively impact the mental health of the affected community, increasing rates of depression, anxiety, or stress. It has been proposed that this stems in part from crisis-related stimuli triggering negative reactions that interrupt daily life. Given the frequency and prominence of crisis events, it is crucial to understand when crisis-related stimuli involuntarily capture attention and trigger increased stress and distraction from obligations. The emotional attentional blink (EAB) paradigm, in which emotional distractors hinder report of subsequent targets in streams of rapidly displayed stimuli, allows examination of such attentional capture in a rapidly changing, dynamic environment. EABs are typically observed with generally disturbing stimuli, but stimuli related to personal traumas yield similar or greater effects, indicating strong attentional capture by stimuli related to individual trauma history. The current study investigated whether a comparable or increased crisis-related EAB exists within a community affected by a large-scale crisis. Specifically, effects of conventional emotional distractors and distractors related to recent crises were compared using EABs in university students without a mental health diagnosis. Experiment 1 used images related to Hurricane Harvey, evaluating a crisis 4 years prior to data collection. Experiment 2 used words related to the COVID-19 pandemic, evaluating an ongoing crisis at the time of data collection. In both experiments, the conventional EAB distractors yielded strong EABs, while the crisis-related distractors yielded absent or weak EABs in the same participants. This suggests that crisis-related stimuli do not have special potency for capturing attention in the general university student population. More generally, crises affecting communities do not necessarily yield widespread, strong reactivity to crisis-related stimuli.
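For readers unfamiliar with the paradigm, the sketch below illustrates how a single EAB trial could be assembled: an emotional distractor is embedded in a rapid stream of filler images, and the to-be-reported target appears a fixed number of positions (the lag) later. The function, timings, and file names are illustrative assumptions, not the stimulus parameters used in this study.

```python
# Illustrative sketch of a single emotional attentional blink (EAB) trial:
# a rapid serial visual presentation (RSVP) stream of fillers containing an
# emotional distractor and, `lag` positions later, the target.
import random

def build_eab_trial(fillers, distractor, target, lag=2, stream_length=15):
    """Return an ordered RSVP stream with the distractor and, `lag`
    positions later, the to-be-reported target embedded among fillers."""
    stream = random.sample(fillers, stream_length)
    distractor_pos = random.randint(3, stream_length - lag - 1)
    stream[distractor_pos] = distractor
    stream[distractor_pos + lag] = target
    return stream  # each item would be displayed briefly (e.g., ~100 ms)

fillers = [f"filler_{i:02d}.jpg" for i in range(30)]
print(build_eab_trial(fillers, "crisis_image.jpg", "rotated_target.jpg"))
```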
{"title":"Crisis-related stimuli do not increase the emotional attentional blink in a general university student population.","authors":"Lindsay A Santacroce, Benjamin J Tamber-Rosenau","doi":"10.1186/s41235-023-00525-7","DOIUrl":"10.1186/s41235-023-00525-7","url":null,"abstract":"<p><p>Crises such as natural disasters or pandemics negatively impact the mental health of the affected community, increasing rates of depression, anxiety, or stress. It has been proposed that this stems in part from crisis-related stimuli triggering negative reactions that interrupt daily life. Given the frequency and prominence of crisis events, it is crucial to understand when crisis-related stimuli involuntarily capture attention and trigger increased stress and distraction from obligations. The emotional attentional blink (EAB) paradigm-in which emotional distractors hinder report of subsequent targets in streams of rapidly displayed stimuli-allows examination of such attentional capture in a rapidly changing dynamic environment. EABs are typically observed with generally disturbing stimuli, but stimuli related to personal traumas yield similar or greater effects, indicating strong attentional capture by stimuli related to individual trauma history. The current study investigated whether a similar comparable or increased crisis-related EAB exists within a community affected by large-scale crisis. Specifically, effects of conventional emotional distractors and distractors related to recent crises were compared using EABs in university students without a mental health diagnosis. Experiment 1 used images related to Hurricane Harvey, evaluating a crisis 4 years prior to data collection. Experiment 2 used words related to the COVID pandemic, evaluating an ongoing crisis at the time of data collection. In both experiments, the conventional EAB distractors yielded strong EABs, while the crisis-related distractors yielded absent or weak EABs in the same participants. This suggests that crisis-related stimuli do not have special potency for capturing attention in the general university student population. More generally, crises affecting communities do not necessarily yield widespread, strong reactivity to crisis-related stimuli.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"3"},"PeriodicalIF":4.1,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10774501/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139404782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sound localization in noisy contexts: performance, metacognitive evaluations and head movements.
Pub Date: 2024-01-08 | DOI: 10.1186/s41235-023-00530-w
Chiara Valzolgher, Sara Capra, Elena Gessa, Tommaso Rosi, Elena Giovanelli, Francesco Pavani
Localizing sounds in noisy environments can be challenging. Here, we reproduce real-life soundscapes to investigate the effects of environmental noise on sound localization experience. We evaluated participants' performance and metacognitive assessments, including measures of sound localization effort and confidence, while also tracking their spontaneous head movements. Normal-hearing participants (N = 30) completed a speech-localization task conducted in three common soundscapes that progressively increased in complexity: nature, traffic, and a cocktail party setting. To control visual information and measure behaviors, we used visual virtual reality technology. The results revealed that the complexity of the soundscape had an impact on both performance errors and metacognitive evaluations. Participants reported increased effort and reduced confidence for sound localization in more complex noise environments. By contrast, the level of soundscape complexity did not influence the use of spontaneous exploratory head-related behaviors. We also observed that, irrespective of the noise condition, participants who made more head rotations and explored a wider extent of space with their heads showed lower localization errors. Interestingly, we found preliminary evidence that an increase in spontaneous head movements, specifically the extent of head rotation, leads to a decrease in perceived effort and an increase in confidence at the single-trial level. These findings expand previous observations regarding sound localization in noisy environments by broadening the perspective to also include metacognitive evaluations, exploratory behaviors, and their interactions.
{"title":"Sound localization in noisy contexts: performance, metacognitive evaluations and head movements.","authors":"Chiara Valzolgher, Sara Capra, Elena Gessa, Tommaso Rosi, Elena Giovanelli, Francesco Pavani","doi":"10.1186/s41235-023-00530-w","DOIUrl":"10.1186/s41235-023-00530-w","url":null,"abstract":"<p><p>Localizing sounds in noisy environments can be challenging. Here, we reproduce real-life soundscapes to investigate the effects of environmental noise on sound localization experience. We evaluated participants' performance and metacognitive assessments, including measures of sound localization effort and confidence, while also tracking their spontaneous head movements. Normal-hearing participants (N = 30) were engaged in a speech-localization task conducted in three common soundscapes that progressively increased in complexity: nature, traffic, and a cocktail party setting. To control visual information and measure behaviors, we used visual virtual reality technology. The results revealed that the complexity of the soundscape had an impact on both performance errors and metacognitive evaluations. Participants reported increased effort and reduced confidence for sound localization in more complex noise environments. On the contrary, the level of soundscape complexity did not influence the use of spontaneous exploratory head-related behaviors. We also observed that, irrespective of the noisy condition, participants who implemented a higher number of head rotations and explored a wider extent of space by rotating their heads made lower localization errors. Interestingly, we found preliminary evidence that an increase in spontaneous head movements, specifically the extent of head rotation, leads to a decrease in perceived effort and an increase in confidence at the single-trial level. These findings expand previous observations regarding sound localization in noisy environments by broadening the perspective to also include metacognitive evaluations, exploratory behaviors and their interactions.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"4"},"PeriodicalIF":4.1,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10774233/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139404783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The impact of AI errors in a human-in-the-loop process.
Pub Date: 2024-01-07 | DOI: 10.1186/s41235-023-00529-3
Ujué Agudo, Karlos G Liberal, Miren Arrese, Helena Matute
Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human-computer interaction may influence the final decision. In two experiments, we simulate an automated decision-making process in which participants judge multiple defendants in relation to various crimes, and we manipulate when participants receive support from a purported artificial intelligence system (before or after they make their own judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/b6p4z/. Experiment 2 was preregistered.
{"title":"The impact of AI errors in a human-in-the-loop process.","authors":"Ujué Agudo, Karlos G Liberal, Miren Arrese, Helena Matute","doi":"10.1186/s41235-023-00529-3","DOIUrl":"10.1186/s41235-023-00529-3","url":null,"abstract":"<p><p>Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human-computer interaction may influence the final decision. In two experiments, we simulate an automated decision-making process in which participants judge multiple defendants in relation to various crimes, and we manipulate the time in which participants receive support from a supposed automated system with Artificial Intelligence (before or after they make their judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/b6p4z/ Experiment 2 was preregistered.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"1"},"PeriodicalIF":4.1,"publicationDate":"2024-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10772030/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139378521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Driver behavior while using Level 2 vehicle automation: a hybrid naturalistic study.
Pub Date: 2023-12-20 | DOI: 10.1186/s41235-023-00527-5
Joel M Cooper, Kaedyn W Crabtree, Amy S McDonnell, Dominik May, Sean C Strayer, Tushig Tsogtbaatar, Danielle R Cook, Parker A Alexander, David M Sanbonmatsu, David L Strayer
Vehicle automation is becoming more prevalent. Understanding how drivers use this technology and its safety implications is crucial. In a 6- to 8-week study, we leveraged a hybrid naturalistic driving research design to evaluate driver behavior with Level 2 vehicle automation, incorporating unique naturalistic and experimental control conditions. Our investigation covered four main areas: automation usage, system warnings, driving demand and driver arousal, and secondary task engagement. While on the interstate, drivers were advised to engage Level 2 automation whenever they deemed it safe, and they complied by using it over 70% of the time. Interestingly, the frequency of system warnings increased with prolonged use, suggesting an evolving relationship between drivers and the automation features. Our data also revealed that drivers were discerning in their use of automation, opting for manual control under high driving demand conditions. Contrary to common safety concerns, our data indicated no significant rise in driver fatigue or fidgeting when using automation compared to a control condition. Additionally, observed patterns of engagement in secondary tasks like radio listening and text messaging challenge existing assumptions about automation leading to dangerous driver distraction. Overall, our findings provide new insights into the conditions under which drivers opt to use automation and reveal a nuanced behavioral profile that emerges when automation is in use.
{"title":"Driver behavior while using Level 2 vehicle automation: a hybrid naturalistic study.","authors":"Joel M Cooper, Kaedyn W Crabtree, Amy S McDonnell, Dominik May, Sean C Strayer, Tushig Tsogtbaatar, Danielle R Cook, Parker A Alexander, David M Sanbonmatsu, David L Strayer","doi":"10.1186/s41235-023-00527-5","DOIUrl":"10.1186/s41235-023-00527-5","url":null,"abstract":"<p><p>Vehicle automation is becoming more prevalent. Understanding how drivers use this technology and its safety implications is crucial. In a 6-8 week naturalistic study, we leveraged a hybrid naturalistic driving research design to evaluate driver behavior with Level 2 vehicle automation, incorporating unique naturalistic and experimental control conditions. Our investigation covered four main areas: automation usage, system warnings, driving demand, and driver arousal, as well as secondary task engagement. While on the interstate, drivers were advised to engage Level 2 automation whenever they deemed it safe, and they complied by using it over 70% of the time. Interestingly, the frequency of system warnings increased with prolonged use, suggesting an evolving relationship between drivers and the automation features. Our data also revealed that drivers were discerning in their use of automation, opting for manual control under high driving demand conditions. Contrary to common safety concerns, our data indicated no significant rise in driver fatigue or fidgeting when using automation, compared to a control condition. Additionally, observed patterns of engagement in secondary tasks like radio listening and text messaging challenge existing assumptions about automation leading to dangerous driver distraction. Overall, our findings provide new insights into the conditions under which drivers opt to use automation and reveal a nuanced behavioral profile that emerges when automation is in use.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"8 1","pages":"71"},"PeriodicalIF":4.1,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10733274/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138797606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Overreliance on inefficient computer-mediated information retrieval is countermanded by strategy advice that promotes memory-mediated retrieval.
Pub Date: 2023-12-20 | DOI: 10.1186/s41235-023-00526-6
Patrick P Weis, Wilfried Kunde
With ubiquitous computing, problems can be solved using more strategies than ever, though many strategies feature subpar performance. Here, we explored whether and how simple advice regarding when to use which strategy can improve performance. Specifically, we presented unfamiliar alphanumeric equations (e.g., A + 5 = F) and asked whether counting up the alphabet from the left letter by the indicated number resulted in the letter on the right. In an initial choice block, participants could engage in one of three cognitive strategies: (a) internal counting, (b) internal retrieval of previously generated solutions, or (c) computer-mediated external retrieval of solutions. Participants belonged to one of two groups: they were either instructed to first try internal retrieval before using external retrieval, or received no specific use instructions. In a subsequent internal block with identical instructions for both groups, external retrieval was made unavailable. The 'try internal retrieval first' instruction in the choice block led to pronounced benefits (d = .76) in the internal block. Benefits were due to facilitated creation and retrieval of internal memory traces and possibly also due to improved strategy choice. These results showcase how simple strategy advice can greatly help users navigate cognitive environments. More generally, our results also imply that uninformed use of external tools (i.e., technology) carries the risk of failing to develop and use superior internal processing strategies.
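As an aside, the internal-counting strategy described above amounts to a simple verification rule. The sketch below, with hypothetical names, shows that rule in Python; it illustrates the task logic only and is not the authors' experimental code.

```python
# Verify an alphanumeric equation such as "A + 5 = F" by counting up the
# alphabet from the left letter by the indicated number of steps.
import string

def equation_is_true(left: str, offset: int, right: str) -> bool:
    """Return True if counting `offset` letters up the alphabet from
    `left` lands exactly on `right`."""
    alphabet = string.ascii_uppercase
    index = alphabet.index(left.upper()) + offset
    return index < len(alphabet) and alphabet[index] == right.upper()

print(equation_is_true("A", 5, "F"))  # A -> B, C, D, E, F: True
print(equation_is_true("B", 3, "D"))  # B -> C, D, E: False
```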
{"title":"Overreliance on inefficient computer-mediated information retrieval is countermanded by strategy advice that promotes memory-mediated retrieval.","authors":"Patrick P Weis, Wilfried Kunde","doi":"10.1186/s41235-023-00526-6","DOIUrl":"10.1186/s41235-023-00526-6","url":null,"abstract":"<p><p>With ubiquitous computing, problems can be solved using more strategies than ever, though many strategies feature subpar performance. Here, we explored whether and how simple advice regarding when to use which strategy can improve performance. Specifically, we presented unfamiliar alphanumeric equations (e.g., A + 5 = F) and asked whether counting up the alphabet from the left letter by the indicated number resulted in the right letter. In an initial choice block, participants could engage in one of three cognitive strategies: (a) internal counting, (b) internal retrieval of previously generated solutions, or (c) computer-mediated external retrieval of solutions. Participants belonged to one of two groups: they were either instructed to first try internal retrieval before using external retrieval, or received no specific use instructions. In a subsequent internal block with identical instructions for both groups, external retrieval was made unavailable. The 'try internal retrieval first' instruction in the choice block led to pronounced benefits (d = .76) in the internal block. Benefits were due to facilitated creation and retrieval of internal memory traces and possibly also due to improved strategy choice. These results showcase how simple strategy advice can greatly help users navigate cognitive environments. More generally, our results also imply that uninformed use of external tools (i.e., technology) can bear the risk of not developing and using even more superior internal processing strategies.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"8 1","pages":"72"},"PeriodicalIF":4.1,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10733273/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138798291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While the role of emotion in leadership practice is well acknowledged, there is still a lack of clarity regarding the behavioral distinctions between individuals with varying levels of leadership and the underlying neurocognitive mechanisms at play. This study combined facial emotion recognition with electroencephalography to explore the temporal dynamics of facial emotion recognition among college students with high and low levels of leadership. The results showed no significant differences between the two groups in the amplitude of P1 during the early stage of facial emotion recognition. In the middle stage of facial emotion recognition, the main effect of group was significant on the N170 component, with higher N170 amplitude evoked in high-leadership students than in low-leadership students. In the late stage of facial emotion recognition, low-leadership students showed greater LPP amplitude over the temporal-parietal region when recognizing happy facial expressions than high-leadership students did. In addition, time-frequency results revealed a difference in the alpha frequency band, with high-leadership students exhibiting lower alpha power than low-leadership students. The results suggest differences between students with different leadership levels in the temporal course of facial emotion recognition in the brain, manifested mainly in the middle stage of structural encoding and the late stage of elaborate emotional processing.
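For readers unfamiliar with the alpha-power measure mentioned above, the sketch below shows one common way to quantify alpha-band (8-13 Hz) power from a single EEG channel using a Welch power spectral density. It is a generic illustration under assumed parameters, not the authors' time-frequency analysis pipeline.

```python
# Compute alpha-band (8-13 Hz) power from one EEG channel, assuming `eeg`
# is a 1-D NumPy array of samples recorded at `fs` Hz.
import numpy as np
from scipy.signal import welch

def alpha_band_power(eeg, fs, band=(8.0, 13.0)):
    """Integrate the Welch power spectral density over the alpha band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), int(2 * fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

fs = 250.0                                   # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)              # 2 s of synthetic data
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz + noise
print(alpha_band_power(eeg, fs))
```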
{"title":"The role of leadership level in college students' facial emotion recognition: evidence from event-related potential analysis.","authors":"Huang Gu, Shunshun Du, Peipei Jin, Chengming Wang, Hui He, Mingnan Zhao","doi":"10.1186/s41235-023-00523-9","DOIUrl":"10.1186/s41235-023-00523-9","url":null,"abstract":"<p><p>While the role of emotion in leadership practice is well-acknowledged, there is still a lack of clarity regarding the behavioral distinctions between individuals with varying levels of leadership and the underlying neurocognitive mechanisms at play. This study utilizes facial emotion recognition in conjunction with electroencephalograms to explore the temporal dynamics of facial emotion recognition processes among college students with high and low levels of leadership. The results showed no significant differences in the amplitude of P1 during the early stage of facial emotion recognition between the two groups. In the middle stage of facial emotion recognition, the main effect of group was significant on the N170 component, with higher N170 amplitude evoked in high-leadership students than low-leadership students. In the late stage of facial emotion recognition, low-leadership students evoked greater LPP amplitude in the temporal-parietal lobe when recognizing happy facial emotions compared to high-leadership students. In addition, time-frequency results revealed a difference in the alpha frequency band, with high-leadership students exhibiting lower alpha power than low-leadership students. The results suggest differences in the brain temporal courses of facial emotion recognition between students with different leadership levels, which are mainly manifested in the middle stage of structural encoding and the late stage of delicate emotional processing during facial emotion recognition.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"8 1","pages":"73"},"PeriodicalIF":3.4,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10733243/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138799432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To see or not to see: the parallel processing of self-relevance and facial expressions.
Pub Date: 2023-11-22 | DOI: 10.1186/s41235-023-00524-8
Tuo Liu, Jie Sui, Andrea Hildebrandt
Like a center of "gravity", the self facilitates the processing of information that is directly relevant to it. This phenomenon is known as the self-prioritization effect. However, it remains unclear whether the self-prioritization effect extends to the processing of emotional facial expressions. To fill this gap, we used a self-association paradigm to investigate the impact of self-relevance on the recognition of emotional facial expressions while controlling for confounding factors such as familiarity and overlearning. Using a large and diverse sample, we replicated the effect of self-relevance on face processing but found no evidence that self-relevance modulates facial emotion recognition. We propose two potential theoretical explanations for these findings and emphasize that further research with different experimental designs and a multitask measurement approach is needed to understand this mechanism fully. Overall, our study contributes to the literature on the parallel cognitive processing of self-relevance and facial emotion recognition, with implications for both social and cognitive psychology.
{"title":"To see or not to see: the parallel processing of self-relevance and facial expressions.","authors":"Tuo Liu, Jie Sui, Andrea Hildebrandt","doi":"10.1186/s41235-023-00524-8","DOIUrl":"10.1186/s41235-023-00524-8","url":null,"abstract":"<p><p>The self, like the concept of central \"gravity\", facilitates the processing of information that is directly relevant to the self. This phenomenon is known as the self-prioritization effect. However, it remains unclear whether the self-prioritization effect extends to the processing of emotional facial expressions. To fill this gap, we used a self-association paradigm to investigate the impact of self-relevance on the recognition of emotional facial expressions while controlling for confounding factors such as familiarity and overlearning. Using a large and diverse sample, we replicated the effect of self-relevance on face processing but found no evidence for a modulation of self-relevance on facial emotion recognition. We propose two potential theoretical explanations to account for these findings and emphasize that further research with different experimental designs and a multitasks measurement approach is needed to understand this mechanism fully. Overall, our study contributes to the literature on the parallel cognitive processing of self-relevance and facial emotion recognition, with implications for both social and cognitive psychology.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"8 1","pages":"70"},"PeriodicalIF":4.1,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10665284/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138292057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting detection of hostile intentions: automated assistance in a dynamic decision-making context.
Pub Date: 2023-11-19 | DOI: 10.1186/s41235-023-00519-5
Colleen E Patton, Christopher D Wickens, C A P Smith, Kayla M Noble, Benjamin A Clegg
In a dynamic decision-making task simulating basic ship movements, participants attempted, through a series of actions, to elicit and identify which one of six other ships was exhibiting either of two hostile behaviors. A high-performing, although imperfect, automated attention aid was introduced. It visually highlighted the ship categorized by an algorithm as the most likely to be hostile. Half of the participants also received automation transparency in the form of a statement about why the hostile ship was highlighted. Results indicated that while the aid's advice was often complied with, and hence led to higher accuracy and shorter response times, detection was still suboptimal. Additionally, transparency had limited impact on all aspects of performance. Implications for detection of hostile intentions and the challenges of supporting dynamic decision-making are discussed.
{"title":"Supporting detection of hostile intentions: automated assistance in a dynamic decision-making context.","authors":"Colleen E Patton, Christopher D Wickens, C A P Smith, Kayla M Noble, Benjamin A Clegg","doi":"10.1186/s41235-023-00519-5","DOIUrl":"10.1186/s41235-023-00519-5","url":null,"abstract":"<p><p>In a dynamic decision-making task simulating basic ship movements, participants attempted, through a series of actions, to elicit and identify which one of six other ships was exhibiting either of two hostile behaviors. A high-performing, although imperfect, automated attention aid was introduced. It visually highlighted the ship categorized by an algorithm as the most likely to be hostile. Half of participants also received automation transparency in the form of a statement about why the hostile ship was highlighted. Results indicated that while the aid's advice was often complied with and hence led to higher accuracy with a shorter response time, detection was still suboptimal. Additionally, transparency had limited impacts on all aspects of performance. Implications for detection of hostile intentions and the challenges of supporting dynamic decision making are discussed.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"8 1","pages":"69"},"PeriodicalIF":4.1,"publicationDate":"2023-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10657914/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138048176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using objective measures to examine the effect of suspect-filler similarity on eyewitness identification performance.
Pub Date: 2023-11-06 | DOI: 10.1186/s41235-023-00522-w
Geoffrey L McKinley, Daniel J Peterson
When selecting fillers to include in a police lineup, one must consider the level of similarity between the suspect and potential fillers. In order to reduce misidentifications, an innocent suspect should not stand out. Therefore, it is important that the fillers share some degree of similarity with the suspect. Importantly, increasing suspect-filler similarity too much will render the task too difficult, reducing correct identifications of a guilty suspect. Determining how much similarity yields optimal identification performance is the focus of the current study. Extant research on lineup construction has provided somewhat mixed results, likely in part because of the subjective nature of similarity, which forces researchers to define similarity in relative terms. In the current study, we manipulate suspect-filler similarity via a multidimensional scaling model constructed using objective facial measurements. In doing so, we test the "propitious heterogeneity" and diagnostic-feature-detection hypotheses, which predict an advantage for lineups with low-similarity fillers in terms of discriminability. We found that filler similarity did not affect discriminability. We discuss limitations and future directions.
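To make the stimulus-construction idea concrete, the sketch below shows how objective facial measurements could be turned into a multidimensional scaling (MDS) space in which suspect-filler similarity is a distance. The measurement columns and values are hypothetical, and the code is an illustration of the general approach rather than the authors' pipeline.

```python
# Derive suspect-filler similarity from objective facial measurements via MDS.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Rows = faces (suspect first), columns = hypothetical objective measurements
# (e.g., inter-eye distance, nose length, face width), in arbitrary units.
faces = np.array([
    [4.1, 5.2, 13.0],   # suspect
    [4.0, 5.1, 13.2],   # candidate filler 1
    [3.5, 6.0, 12.1],   # candidate filler 2
    [4.6, 4.4, 14.0],   # candidate filler 3
])

# Pairwise dissimilarities, then a 2-D MDS embedding of the faces.
dissimilarities = squareform(pdist(faces, metric="euclidean"))
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarities)

# Suspect-filler similarity as distance to the suspect in the MDS space;
# low-similarity fillers are those farthest from the suspect.
distances_to_suspect = np.linalg.norm(coords[1:] - coords[0], axis=1)
print(distances_to_suspect)
```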
{"title":"Using objective measures to examine the effect of suspect-filler similarity on eyewitness identification performance.","authors":"Geoffrey L McKinley, Daniel J Peterson","doi":"10.1186/s41235-023-00522-w","DOIUrl":"10.1186/s41235-023-00522-w","url":null,"abstract":"<p><p>When selecting fillers to include in a police lineup, one must consider the level of similarity between the suspect and potential fillers. In order to reduce misidentifications, an innocent suspect should not stand out. Therefore, it is important that the fillers share some degree of similarity. Importantly, increasing suspect-filler similarity too much will render the task too difficult reducing correct identifications of a guilty suspect. Determining how much similarity yields optimal identification performance is the focus of the proposed study. Extant research on lineup construction has provided somewhat mixed results. In part, this is likely due to the subjective nature of similarity, which forces researchers to define similarity in relative terms. In the current study, we manipulate suspect-filler similarity via a multidimensional scaling model constructed using objective facial measurements. In doing so, we test the \"propitious heterogeneity\" and the diagnostic-feature-detection hypotheses which predict an advantage of lineups with low-similarity fillers in terms of discriminability. We found that filler similarity did not affect discriminability. We discuss limitations and future directions.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"8 1","pages":"68"},"PeriodicalIF":4.1,"publicationDate":"2023-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10628061/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71487307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seeing emotions in the eyes: a validated test to study individual differences in the perception of basic emotions.
Pub Date: 2023-11-03 | DOI: 10.1186/s41235-023-00521-x
Maria Franca, Nadia Bolognini, Marc Brysbaert
People are able to perceive emotions in the eyes of others and can therefore see emotions when individuals wear face masks. Research has been hampered by the lack of a good test to measure basic emotions in the eyes. In two studies with 358 and 200 participants, respectively, we developed a test of the ability to see anger, disgust, fear, happiness, sadness, and surprise in images of eyes. Each emotion is measured with 8 stimuli (4 male actors and 4 female actors), matched in terms of difficulty and item discrimination. Participants reliably differed in their performance on the Seeing Emotions in the Eyes test (SEE-48). The test correlated well not only with the Reading the Mind in the Eyes Test (RMET) but also with the Situational Test of Emotion Understanding (STEU), indicating that the SEE-48 measures not only low-level perceptual skills but also broader skills of emotion perception and emotional intelligence. The test is freely available for research and clinical purposes.
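As a concrete illustration of the two item statistics mentioned above, the sketch below computes item difficulty (proportion correct) and item discrimination (corrected item-total correlation) from a made-up 0/1 response matrix; the data are synthetic and not related to the SEE-48 norms.

```python
# Item difficulty and discrimination for a set of dichotomously scored items.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(200, 8))   # 200 participants x 8 items, scored 0/1

# Item difficulty: proportion of correct responses per item.
difficulty = responses.mean(axis=0)

# Item discrimination: correlation of each item with the total of the remaining items.
discrimination = np.array([
    np.corrcoef(responses[:, i], np.delete(responses, i, axis=1).sum(axis=1))[0, 1]
    for i in range(responses.shape[1])
])

print(np.round(difficulty, 2))
print(np.round(discrimination, 2))
```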
{"title":"Seeing emotions in the eyes: a validated test to study individual differences in the perception of basic emotions.","authors":"Maria Franca, Nadia Bolognini, Marc Brysbaert","doi":"10.1186/s41235-023-00521-x","DOIUrl":"10.1186/s41235-023-00521-x","url":null,"abstract":"<p><p>People are able to perceive emotions in the eyes of others and can therefore see emotions when individuals wear face masks. Research has been hampered by the lack of a good test to measure basic emotions in the eyes. In two studies respectively with 358 and 200 participants, we developed a test to see anger, disgust, fear, happiness, sadness and surprise in images of eyes. Each emotion is measured with 8 stimuli (4 male actors and 4 female actors), matched in terms of difficulty and item discrimination. Participants reliably differed in their performance on the Seeing Emotions in the Eyes test (SEE-48). The test correlated well not only with Reading the Mind in the Eyes Test (RMET) but also with the Situational Test of Emotion Understanding (STEU), indicating that the SEE-48 not only measures low-level perceptual skills but also broader skills of emotion perception and emotional intelligence. The test is freely available for research and clinical purposes.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"8 1","pages":"67"},"PeriodicalIF":3.4,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10622392/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71427832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}