Pub Date: 2024-11-21 | DOI: 10.1016/j.cognition.2024.106012
Alexander C. Walker, Jonathan A. Fugelsang, Derek J. Koehler
We examine the impact of partisan language (i.e., language that describes events in a manner that supports a political agenda), both with regard to people's perceptions of the speakers who use it and their evaluations of the events it is used to describe. In two experiments, we recruited 1121 Democrats and Republicans from the United States. Using a set of liberal-biased (e.g., expand voting rights) and conservative-biased (e.g., reduce election security) terms, we find that partisans judge speakers describing polarizing events using ideologically-congruent language as more trustworthy than those describing events in a non-partisan way (e.g., expand mail-in voting). However, when presented to rival partisans, ideologically-biased language promoted negative evaluations of opposing partisans, with speakers to whom out-group language was attributed viewed as far less trustworthy than non-partisan speakers. Furthermore, presenting Democrats and Republicans with ideologically-congruent descriptions of political events polarized their attitudes towards the events described. Overall, the present investigation reveals how partisan language, while praised by co-partisans, can damage trust and amplify disagreement across political divides.
{"title":"Partisan language in a polarized world: In-group language provides reputational benefits to speakers while polarizing audiences","authors":"Alexander C. Walker , Jonathan A. Fugelsang , Derek J. Koehler","doi":"10.1016/j.cognition.2024.106012","DOIUrl":"10.1016/j.cognition.2024.106012","url":null,"abstract":"<div><div>We examine the impact of partisan language (i.e., language that describes events in a manner that supports a political agenda), both with regard to peoples' perceptions of the speakers who use it and their evaluations of the events it is used to describe. In two experiments, we recruited 1121 Democrats and Republicans from the United States. Using a set of liberal-biased (e.g., <em>expand voting rights</em>) and conservative-biased (e.g., <em>reduce election security</em>) terms, we find that partisans judge speakers describing polarizing events using ideologically-congruent language as more trustworthy than those describing events in a non-partisan way (e.g., <em>expand mail-in voting</em>). However, when presented to rival partisans, ideologically-biased language promoted negative evaluations of opposing partisans, with speakers attributed out-group language being viewed as far less trustworthy than non-partisan speakers. Furthermore, presenting Democrats and Republicans with ideologically-congruent descriptions of political events polarized their attitudes towards the events described. Overall, the present investigation reveals how partisan language, while praised by co-partisans, can damage trust and amplify disagreement across political divides.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"254 ","pages":"Article 106012"},"PeriodicalIF":2.8,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142693659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-21 | DOI: 10.1016/j.cognition.2024.106009
Sara Spotorno, Benjamin W. Tatler
Understanding how early scene viewing is guided can reveal fundamental brain mechanisms for quickly making sense of our surroundings. Viewing is often initiated from the left side. Across two experiments, we focused on search initiation for lateralised targets within real-world scenes, investigating the role of the cerebral hemispheres in guiding the first saccade. We aimed to disentangle hemispheric contributions from the effects of reading habits and distinguish between an overall dominance of the right hemisphere for visuospatial processing and finer hemispheric specialisation for the type of target template representation (from pictorial versus verbal cues), spatial scale (global versus local), and timescale (short versus longer). We replicated the tendency to initiate search leftward in both experiments. However, we found no evidence supporting a significant impact of left-to-right reading habits, either as a purely motor bias or as an attentional bias to the left. A general visuospatial dominance of the right hemisphere could not account for the results either. In Experiment 1, we found a greater probability of directing the first saccade toward targets in the left visual field but only after a verbal target cue, with no lateral differences after a pictorial cue. This suggested a contribution of the right hemisphere specialisation in perceptually simulating words' referents. Lengthening the Inter-Stimulus Interval between the cue and the scene (from 100 to 900 ms) resulted in reduced first saccade gain in the left visual field, suggesting a decreased ability of the right hemisphere to use the target template to guide gaze close to the target object, which primarily depends on local information processing. Experiment 2, using visual versus auditory verbal cues, replicated and extended the findings for both first saccade direction and gain. Overall, our study shows that the multidetermined functional specialisation of the cerebral hemispheres is a key driver of early scene search and must be incorporated into theories and models to advance understanding of the mechanisms that guide viewing behaviour.
{"title":"What's left of the leftward bias in scene viewing? Lateral asymmetries in information processing during early search guidance","authors":"Sara Spotorno , Benjamin W. Tatler","doi":"10.1016/j.cognition.2024.106009","DOIUrl":"10.1016/j.cognition.2024.106009","url":null,"abstract":"<div><div>Understanding how early scene viewing is guided can reveal fundamental brain mechanisms for quickly making sense of our surroundings. Viewing is often initiated from the left side. Across two experiments, we focused on search initiation for lateralised targets within real-world scenes, investigating the role of the cerebral hemispheres in guiding the first saccade. We aimed to disentangle hemispheric contribution from the effects of reading habits and distinguish between an overall dominance of the right hemisphere for visuospatial processing and finer hemispheric specialisation for the type of target template representation (from pictorial versus verbal cues), spatial scale (global versus local), and timescale (short versus longer). We replicated the tendency to initiate search leftward in both experiments. However, we found no evidence supporting a significant impact of left-to-right reading habits, either as a purely motor or attentional bias to the left. A general visuospatial dominance of the right hemisphere could not account for the results either. In Experiment 1, we found a greater probability of directing the first saccade toward targets in the left visual field but only after a verbal target cue, with no lateral differences after a pictorial cue. This suggested a contribution of the right hemisphere specialisation in perceptually simulating words' referents. Lengthening the Inter-Stimulus Interval between the cue and the scene (from 100 to 900 ms) resulted in reduced first saccade gain in the left visual field, suggesting a decreased ability of the the right hemisphere to use the target template to guide gaze close to the target object, which primarily depends on local information processing. Experiment 2, using visual versus auditory verbal cues, replicated and extended the findings for both first saccade direction and gain. Overall, our study shows that the multidetermined functional specialisation of the cerebral hemispheres is a key driver of early scene search and must be incorporated into theories and models to advance understanding of the mechanisms that guide viewing behaviour.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"254 ","pages":"Article 106009"},"PeriodicalIF":2.8,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142693660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-20 | DOI: 10.1016/j.cognition.2024.106010
Daniel R. Lametti, Emma D. Wheeler, Samantha Palatinus, Imane Hocine, Douglas M. Shiller
Interactions between the context in which a sensorimotor skill is learned and the recall of that memory have been primarily studied in limb movements, but speech production requires movement, and many aspects of speech processing are influenced by task-relevant contextual information. Here, in ecologically valid speech (read sentences), we test whether English-French bilinguals can use the language of production to acquire and recall distinct motor plans for similar speech sounds spanning the production workspace. Participants experienced real-time alterations of auditory feedback while producing interleaved English and French sentences. The alterations were equal in magnitude but opposite in direction between languages. Over three experiments (n = 15 in each), we observed language-specific sensorimotor learning in speech that countered the alterations and persisted after the alterations were removed. The effects were not observed in a fourth experiment (n = 15) when the feedback alterations were tied to a non-linguistic cue. In a fifth experiment (n = 15), we provide further confirmation that the observed language-specific changes in speech production were confined to sentence production, the linguistic level at which they were learned. The results contrast with recent work and theories of second language learning that predict broad interference between L1 and L2 phonetic representations. When faced with contrasting sensorimotor demands between languages, bilinguals readily acquire and recall highly specific motor representations for speech.
{"title":"Language enables the acquisition of distinct sensorimotor memories for speech","authors":"Daniel R. Lametti , Emma D. Wheeler , Samantha Palatinus , Imane Hocine , Douglas M. Shiller","doi":"10.1016/j.cognition.2024.106010","DOIUrl":"10.1016/j.cognition.2024.106010","url":null,"abstract":"<div><div>Interactions between the context in which a sensorimotor skill is learned and the recall of that memory have been primarily studied in limb movements, but speech production requires movement, and many aspects of speech processing are influenced by task-relevant contextual information. Here, in ecologically valid speech (read sentences), we test whether English-French bilinguals can use the language of production to acquire and recall distinct motor plans for similar speech sounds spanning the production workspace. Participants experienced real-time alterations of auditory feedback while producing interleaved English and French sentences. The alterations were equal in magnitude but <em>opposite</em> in direction between languages. Over three experiments (<em>n</em> = 15 in each), we observed language-specific sensorimotor learning in speech that countered the alterations and persisted after the alterations were removed. The effects were not observed in a fourth experiment (<em>n</em> = 15) when the feedback alterations were tied to a non-linguistic cue. In a fifth experiment (n = 15), we provide further confirmation that the observed language-specific changes in speech production were confined to sentence production, the linguistic level at which they were learned. The results contrast with recent work and theories of second language learning that predict broad interference between L1 and L2 phonetic representations. When faced with contrasting sensorimotor demands between languages, bilinguals readily acquire and recall highly specific motor representations for speech.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"254 ","pages":"Article 106010"},"PeriodicalIF":2.8,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142689289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-19 | DOI: 10.1016/j.cognition.2024.106011
Peng Liu, Yueying Chu, Siming Zhai, Tingru Zhang, Edmond Awad
Machines powered by artificial intelligence have the potential to replace or collaborate with human decision-makers in moral settings. In these roles, machines would face moral tradeoffs, such as automated vehicles (AVs) distributing inevitable risks among road users. Do people believe that machines should make moral decisions differently from humans? If so, why? To address these questions, we conducted six studies (N = 6805) to examine how people, as observers, believe human drivers and AVs should act in similar moral dilemmas and how they judge their moral decisions. In pedestrian-only dilemmas where the two agents had to sacrifice one pedestrian to save more pedestrians, participants held them to similar utilitarian norms (Study 1). In occupant dilemmas where the agents needed to weigh the in-vehicle occupant against more pedestrians, participants were less accepting of AVs sacrificing their passenger compared to human drivers sacrificing themselves (Studies 1–3) or another passenger (Studies 5–6). The difference was not driven by reduced occupant agency in AVs (Study 4) or by non-voluntary occupant sacrifice in AVs (Study 5), but rather by the perceived social relationship between AVs and their users (Study 6). Thus, even when people adopt an impartial stance as observers, they are more likely to believe that AVs should prioritize serving their users in moral dilemmas. We discuss the theoretical and practical implications for AV morality.
Morality on the road: Should machine drivers be more utilitarian than human drivers? Cognition, Volume 254, Article 106011.
Pub Date: 2024-11-18 | DOI: 10.1016/j.cognition.2024.106000
Carolin V. Hey, Marie Luisa Schaper, Ute J. Bayen
The Continued Influence Effect (CIE) is the phenomenon that retracted information often continues to influence judgments and inferences. The CIE is rational when the source that retracts the information (the retractor) is less credible than the source that originally presented the information (the informant; Connor Desai et al., 2020). Conversely, a CIE is not rational when the retractor is at least as credible as the informant. Thus, a rational account predicts that the CIE depends on the relative credibility of informant and retractor. In two experiments (N = 151, N = 146), informant credibility and retractor credibility were independently manipulated. Participants read a fictitious news report in which original information and a retraction were each presented by either a source with high credibility or a source with low credibility. In both experiments, when the informant was more credible than the retractor, participants showed a CIE compared to control participants who saw neither the information nor the retraction (ds > 0.82). When the informant was less credible than the retractor, participants showed no CIE, in line with a rational account. However, in Experiment 2, participants also showed a CIE when informant and retractor were equally credible (ds > 0.51). This cannot be explained by a rational account but is consistent with error-based accounts of the CIE. Thus, a rational account alone cannot explain the complete pattern of results and needs to be complemented by accounts that view the CIE as a memory-based error.
{"title":"Relative source credibility affects the continued influence effect: Evidence of rationality in the CIE","authors":"Carolin V. Hey, Marie Luisa Schaper, Ute J. Bayen","doi":"10.1016/j.cognition.2024.106000","DOIUrl":"10.1016/j.cognition.2024.106000","url":null,"abstract":"<div><div>The <em>Continued Influence Effect</em> (CIE) is the phenomenon that retracted information often continues to influence judgments and inferences. The CIE is rational when the source that retracts the information (the <em>retractor</em>) is less credible than the source that originally presented the information (the <em>informant</em>; Connor Desai et al., 2020). Conversely, a CIE is not rational when the retractor is at least as credible as the informant. Thus, a rational account predicts that the CIE depends on the relative credibility of informant and retractor. In two experiments (<em>N</em> = 151, <em>N</em> = 146), informant credibility and retractor credibility were independently manipulated. Participants read a fictitious news report in which original information and a retraction were each presented by either a source with high credibility or a source with low credibility. In both experiments, when the informant was more credible than the retractor, participants showed a CIE compared to control participants who saw neither the information nor the retraction (<em>d</em>s > 0.82). When the informant was less credible than the retractor, participants showed no CIE, in line with a rational account. However, in Experiment 2, participants also showed a CIE when informant and retractor were equally credible (<em>d</em>s > 0.51). This cannot be explained by a rational account, but is consistent with error-based accounts of the CIE. Thus, a rational account alone cannot fully account for the complete pattern of results, but needs to be complemented with accounts that view the CIE as a memory-based error.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"254 ","pages":"Article 106000"},"PeriodicalIF":2.8,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142676486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-16 | DOI: 10.1016/j.cognition.2024.106008
Xue Tian, Yiying Song, Jia Liu
Face recognition is crucial for social interactions. Traditional approaches primarily rely on subjective judgment, utilizing a pre-selected set of facial features based on literature or intuition to identify critical facial features for face recognition. In this study, we adopted a reverse-correlation approach, aligning responses of a deep convolutional neural network (DCNN) with its internal representations to objectively identify facial features pivotal for face recognition. Specifically, we trained a DCNN, namely VGG-FD, to possess human-like capability in discriminating facial identities. A representational similarity analysis (RSA) was employed to characterize VGG-FD's performance metrics, which were subsequently reverse-correlated with its representations in layers capable of discriminating facial identities. Our analysis revealed a higher likelihood of face pairs being perceived as different identities when their representations differed significantly in areas such as the eyes, eyebrows, or central facial region, suggesting the significance of the eyes as facial parts and of the central facial region as integral to face configuration in face recognition. In summary, our study leveraged DCNNs to identify critical facial features for face discrimination in a hypothesis-neutral, data-driven manner, thereby advocating for the adoption of this new paradigm to explore critical facial features across various face recognition tasks.
{"title":"Decoding face identity: A reverse-correlation approach using deep learning","authors":"Xue Tian , Yiying Song , Jia Liu","doi":"10.1016/j.cognition.2024.106008","DOIUrl":"10.1016/j.cognition.2024.106008","url":null,"abstract":"<div><div>Face recognition is crucial for social interactions. Traditional approaches primarily rely on subjective judgment, utilizing a pre-selected set of facial features based on literature or intuition to identify critical facial features for face recognition. In this study, we adopted a reverse-correlation approach, aligning responses of a deep convolutional neural network (DCNN) with its internal representations to objectively identify facial features pivotal for face recognition. Specifically, we trained a DCNN, namely VGG-FD, to possess human-like capability in discriminating facial identities. A representational similarity analysis (RSA) was employed to characterize VGG-FD's performance metrics, which was subsequently reverse-correlated with its representations in layers capable of discriminating facial identities. Our analysis revealed a higher likelihood of face pairs being perceived as different identities when their representations significantly differed in areas such as the eyes, eyebrows, or central facial region, suggesting the significance of the eyes as facial parts and the central facial region as an integral of face configuration in face recognition. In summary, our study leveraged DCNNs to identify critical facial features for face discrimination in a hypothesis-neutral, data-driven manner, hereby advocating for the adoption of this new paradigm to explore critical facial features across various face recognition tasks.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"254 ","pages":"Article 106008"},"PeriodicalIF":2.8,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142649374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-15 | DOI: 10.1016/j.cognition.2024.106002
Léa Entzmann, Árni Gunnar Ásgeirsson, Árni Kristjánsson
While the visual world is rich and complex, it nevertheless contains many statistical regularities. For example, environmental feature distributions tend to remain relatively stable from one moment to the next. Recent findings have shown how observers can learn surprising details of environmental color distributions, even when the colors belong to actively ignored stimuli such as distractors in visual search. Our aim was to determine whether such learning influences orienting in the visual environment, measured with saccadic eye movements. In two visual search experiments, observers had to find an odd-one-out target. In the first, we tested cases where observers selected targets by fixating them. In the second, we measured saccadic eye movements while observers made judgments about the target and responded manually. Trials were structured in blocks, containing learning trials where distractors came from the same color distribution (uniform or Gaussian), while on subsequent test trials the target was at different distances from the mean of the learning distractor distribution. For both manual and saccadic measures, performance improved throughout the learning trials and was better when the distractor colors came from a Gaussian distribution. Moreover, saccade latencies during test trials depended on the distance between the color of the current target and the distractors on learning trials, replicating results obtained with manual responses. Latencies were slowed when the target color was within the learning distractor color distribution and also revealed that observers learned the difference between uniform and Gaussian distributions. The importance of several variables in predicting saccadic and manual reaction times was studied using random forests, revealing similar rankings for both modalities, although previous distractor color had a higher impact on free eye movements. Overall, our results demonstrate learning of detailed characteristics of environmental color distributions that affects early attentional selection rather than later decisional processes.
{"title":"How does color distribution learning affect goal-directed visuomotor behavior?","authors":"Léa Entzmann , Árni Gunnar Ásgeirsson , Árni Kristjánsson","doi":"10.1016/j.cognition.2024.106002","DOIUrl":"10.1016/j.cognition.2024.106002","url":null,"abstract":"<div><div>While the visual world is rich and complex, importantly, it nevertheless contains many statistical regularities. For example, environmental feature distributions tend to remain relatively stable from one moment to the next. Recent findings have shown how observers can learn surprising details of environmental color distributions, even when the colors belong to actively ignored stimuli such as distractors in visual search. Our aim was to determine whether such learning influences orienting in the visual environment, measured with saccadic eye movements. In two visual search experiments, observers had to find an odd-one-out target. Firstly, we tested cases where observers selected targets by fixating them. Secondly, we measured saccadic eye movements when observers made judgments on the target and responded manually. Trials were structured in blocks, containing <em>learning trials</em> where distractors came from the same color distribution (uniform or Gaussian) while on subsequent <em>test trials</em>, the target was at different distances from the mean of the learning distractor distribution. For both manual and saccadic measures, performance improved throughout the learning trials and was better when the distractor colors came from a Gaussian distribution. Moreover, saccade latencies during test trials depended on the distance between the color of the current target and the distractors on learning trials, replicating results obtained with manual responses. Latencies were slowed when the target color was within the learning distractor color distribution and also revealed that observers learned the difference between uniform and Gaussian distributions. The importance of several variables in predicting saccadic and manual reaction times was studied using random forests, revealing similar rankings for both modalities, although previous distractor color had a higher impact on free eye movements. Overall, our results demonstrate learning of detailed characteristics of environmental color distributions that affects early attentional selection rather than later decisional processes.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"254 ","pages":"Article 106002"},"PeriodicalIF":2.8,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142639892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-14 | DOI: 10.1016/j.cognition.2024.106007
Xiaojin Ma, Richard A. Abrams
Recent findings suggest that it is possible for people to proactively avoid attentional capture by salient distractors during visual search. The results have important implications for understanding the competing influences of top-down and bottom-up factors in visual attention. Nevertheless, questions remain regarding the extent to which apparently ignored distractors are processed. To assess distractor processing, previous experiments have used a probe method in which stimuli are occasionally superimposed on the search display, requiring participants to abort the search and identify the probe stimuli. It has recently been shown that such probe tasks may be vulnerable to decision-level biases, such as a participant's willingness to report stimuli on to-be-ignored items. We report here results from a new method that is not subject to this limitation. In the new method, the non-target search elements, including the salient distractors, contained features that were either congruent or incongruent with the target. Processing of the non-target elements is inferred from the effects of the compatibility of the shared features on judgments about the target. In four experiments using this technique, we show that ignored salient distractors are indeed processed less fully than non-target elements that are not salient, replicating the results of earlier studies using the probe methods. Additionally, the processing of the distractors was found to be reduced at least in part at early perceptual or attentional stages, as assumed by models of attentional suppression. The study confirms the proactive avoidance of capture by salient distractors measured without decision-level biases and provides a new technique for assessing the magnitude of distractor processing.
{"title":"Bias-free measure of distractor avoidance in visual search","authors":"Xiaojin Ma , Richard A. Abrams","doi":"10.1016/j.cognition.2024.106007","DOIUrl":"10.1016/j.cognition.2024.106007","url":null,"abstract":"<div><div>Recent findings suggest that it is possible for people to proactively avoid attentional capture by salient distractors during visual search. The results have important implications for understanding the competing influences of top-down and bottom-up factors in visual attention. Nevertheless, questions remain regarding the extent to which apparently ignored distractors are processed. To assess distractor processing, previous experiments have used a probe method in which stimuli are occasionally superimposed on the search display–requiring participants to abort the search and identify the probe stimuli. It has been recently shown that such probe tasks may be vulnerable to decision-level biases, such as a participant's willingness to report stimuli on to-be-ignored items. We report here results from a new method that is not subject to this limitation. In the new method, the non-target search elements, including the salient distractors, contained features that were either congruent or incongruent with the target. Processing of the non-target elements is inferred from the effects of the compatibility of the shared features on judgments about the target. In four experiments using the technique we show that ignored salient distractors are indeed processed less fully than non-target elements that are not salient, replicating the results of earlier studies using the probe methods. Additionally, the processing of the distractors was found to be reduced at least in part at early perceptual or attentional stages, as assumed by models of attentional suppression. The study confirms the proactive avoidance of capture by salient distractors measured without decision-level biases and provides a new technique for assessing the magnitude of distractor processing.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"254 ","pages":"Article 106007"},"PeriodicalIF":2.8,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-13 | DOI: 10.1016/j.cognition.2024.105984
David Moss, Andres Montealegre, Lance S. Bush, Lucius Caviola, David Pizarro
Prior work has established that laypeople do not consistently treat moral questions as being objectively true or as merely true relative to different perspectives. Rather, these metaethical judgments vary dramatically across moral issues and in response to different social influences. We offer a potential explanation by examining how objectivists and relativists are evaluated in different contexts. We provide evidence for a novel account of metaethical judgments as signaling tolerance or intolerance of disagreement. The social implications of signaling tolerance or intolerance in different contexts may motivate different metaethical judgments. Study 1 finds that relativists are perceived as more tolerant and empathic, as having superior moral character, and as more desirable social partners than objectivists. Study 2 replicates these findings with a within-participants design and also shows that objectivists are perceived as more morally serious than relativists. Study 3 examines evaluations of objectivists and relativists regarding concrete moral issues, finding that these results vary across situations of moral agreement and disagreement. Study 4 finds that participants' metaethical stances likewise vary when responding in the way they think would make a person who agrees or disagrees with them evaluate them more positively. However, in Study 5, we find no effect on metaethical judgment of telling participants they will be evaluated by a person who agrees or disagrees with them, which suggests either a failure to induce reputational concerns or a more limited influence of reputational considerations on metaethical judgments, despite strong effects on social evaluation.
{"title":"Signaling (in)tolerance: Social evaluation and metaethical relativism and objectivism","authors":"David Moss , Andres Montealegre , Lance S. Bush , Lucius Caviola , David Pizarro","doi":"10.1016/j.cognition.2024.105984","DOIUrl":"10.1016/j.cognition.2024.105984","url":null,"abstract":"<div><div>Prior work has established that laypeople do not consistently treat moral questions as being objectively true or as merely true relative to different perspectives. Rather, these metaethical judgments vary dramatically across moral issues and in response to different social influences. We offer a potential explanation by examining how objectivists and relativists are evaluated in different contexts. We provide evidence for a novel account of metaethical judgments as signaling tolerance or intolerance of disagreement. The social implications of signaling tolerance or intolerance in different contexts may motivate different metaethical judgments. Study 1 finds that relativists are perceived as more tolerant, empathic, having superior moral character, and as more desirable as social partners than objectivists. Study 2 replicates these findings with a within-participants design and also shows that objectivists are perceived as more morally serious than relativists. Study 3 examines evaluations of objectivists and relativists regarding concrete moral issues, finding these results vary across situations of moral agreement and disagreement. Study 4 finds that participants' metaethical stances likewise vary when responding in the way they think would make a person who agrees or disagrees with them evaluate them more positively. However, in Study 5, we find no effect on metaethical judgment of telling participants they will be evaluated by a person who agrees or disagrees with them, which suggests either a failure to induce reputational concerns or a more limited influence of reputational considerations on metaethical judgments, despite strong effects on social evaluation.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"254 ","pages":"Article 105984"},"PeriodicalIF":2.8,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-12 | DOI: 10.1016/j.cognition.2024.106003
Deena Skolnick Weisberg
Wilson et al. (2025) report a failed attempt to replicate the reductive allure effect: Unlike prior work, they do not find that participants judged explanations of scientific phenomena to be higher quality when they contained irrelevant reductive language. The current commentary considers three possible reasons for this failure to replicate: (1) a change in the nature of online study participants, (2) a change in the background knowledge that people bring to judgments of scientific explanations, and (3) a change in the kinds of explanations that people find satisfying.
{"title":"Possible reasons for reductive seductions: A reply to Wilson et al.","authors":"Deena Skolnick Weisberg","doi":"10.1016/j.cognition.2024.106003","DOIUrl":"10.1016/j.cognition.2024.106003","url":null,"abstract":"<div><div>Wilson et al. (2025) report a failed attempt to replicate the reductive allure effect: Unlike prior work, they do not find that participants judged explanations of scientific phenomena to be higher quality when they contained irrelevant reductive language. The current commentary considers three possible reasons for this failure to replicate: (1) a change in the nature of online study participants, (2) a change in the background knowledge that people bring to judgments of scientific explanations, and (3) a change in the kinds of explanations that people find satisfying.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"254 ","pages":"Article 106003"},"PeriodicalIF":2.8,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}