Prokofiev was (almost) right: A cross-cultural investigation of auditory-conceptual associations in Peter and the Wolf.
Pub Date: 2024-08-01 | Epub Date: 2024-01-24 | DOI: 10.3758/s13423-023-02435-7
Nicola Di Stefano, Alessandro Ansani, Andrea Schiavio, Charles Spence
Over recent decades, studies investigating cross-modal correspondences have documented the existence of a wide range of consistent cross-modal associations between simple auditory and visual stimuli or dimensions (e.g., pitch-lightness). Far fewer studies have investigated the association between complex and realistic auditory stimuli and visually presented concepts (e.g., musical excerpts-animals). Surprisingly, however, there is little evidence concerning the extent to which these associations are shared across cultures. To address this gap in the literature, two experiments using a set of stimuli based on Prokofiev's symphonic fairy tale Peter and the Wolf are reported. In Experiment 1, 293 participants from several countries and with very different language backgrounds rated the association between the musical excerpts and the images and words representing the story's characters (namely, bird, duck, wolf, cat, and grandfather). The results revealed that participants tended to consistently associate the wolf and the bird with the corresponding musical excerpt, while the stimuli of the other characters were not consistently matched across participants. Remarkably, neither the participants' cultural background nor their musical expertise affected the ratings. In Experiment 2, 104 participants were invited to rate each stimulus on eight emotional features. The results revealed that the emotional profiles associated with the music and with the concepts of the wolf and the bird were perceived as more consistent across observers than the emotional profiles associated with the music and the concepts of the duck, the cat, and the grandfather. Taken together, these findings suggest that certain auditory-conceptual associations are perceived consistently across cultures and may be mediated by emotional associations.
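The consistency analysis in Experiment 1 can be pictured with a toy sketch: given a participants × excerpts × characters rating array, one can test whether each character's matching excerpt is rated above the mismatching ones. The array shape, rating scale, and paired t-test below are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: testing whether participants consistently match a character's
# musical excerpt above the non-matching excerpts. Data are fabricated and the
# coding is an assumption, not the study's analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants, n_characters = 293, 5

# ratings[p, i, j]: participant p's association rating (1-7) between
# excerpt i and character j; the diagonal holds the "correct" pairings.
ratings = rng.integers(1, 8, size=(n_participants, n_characters, n_characters)).astype(float)

for j, name in enumerate(["bird", "duck", "wolf", "cat", "grandfather"]):
    match = ratings[:, j, j]                                        # rating for the correct excerpt
    mismatch = np.delete(ratings[:, :, j], j, axis=1).mean(axis=1)  # mean rating for the rest
    t, p = stats.ttest_rel(match, mismatch)                         # paired test across participants
    print(f"{name}: match-mismatch diff = {(match - mismatch).mean():.2f}, p = {p:.3f}")
```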
{"title":"Prokofiev was (almost) right: A cross-cultural investigation of auditory-conceptual associations in Peter and the Wolf.","authors":"Nicola Di Stefano, Alessandro Ansani, Andrea Schiavio, Charles Spence","doi":"10.3758/s13423-023-02435-7","DOIUrl":"10.3758/s13423-023-02435-7","url":null,"abstract":"<p><p>Over recent decades, studies investigating cross-modal correspondences have documented the existence of a wide range of consistent cross-modal associations between simple auditory and visual stimuli or dimensions (e.g., pitch-lightness). Far fewer studies have investigated the association between complex and realistic auditory stimuli and visually presented concepts (e.g., musical excerpts-animals). Surprisingly, however, there is little evidence concerning the extent to which these associations are shared across cultures. To address this gap in the literature, two experiments using a set of stimuli based on Prokofiev's symphonic fairy tale Peter and the Wolf are reported. In Experiment 1, 293 participants from several countries and with very different language backgrounds rated the association between the musical excerpts, images and words representing the story's characters (namely, bird, duck, wolf, cat, and grandfather). The results revealed that participants tended to consistently associate the wolf and the bird with the corresponding musical excerpt, while the stimuli of other characters were not consistently matched across participants. Remarkably, neither the participants' cultural background, nor their musical expertise affected the ratings. In Experiment 2, 104 participants were invited to rate each stimulus on eight emotional features. The results revealed that the emotional profiles associated with the music and with the concept of the wolf and the bird were perceived as more consistent between observers than the emotional profiles associated with the music and the concept of the duck, the cat, and the grandpa. Taken together, these findings therefore suggest that certain auditory-conceptual associations are perceived consistently across cultures and may be mediated by emotional associations.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11358347/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139547038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
No effect of spatial congruence on rapid temporal recalibration to audiovisual asynchrony.
Pub Date: 2024-08-01 | Epub Date: 2024-01-03 | DOI: 10.3758/s13423-023-02441-9
Kyuto Uno, Souta Hidaka
The brain integrates multisensory information to construct coherent perceptual representations based on spatial and temporal congruence. Intriguingly, multisensory timing perception can be flexibly calibrated. Repeated exposure to audiovisual asynchrony induces shifts in subjective simultaneity (temporal recalibration). Spatial congruence is known to serve as a grouping cue for recalibration when the audiovisual temporal relationship is ambiguous during exposure. A single exposure to audiovisual asynchrony can also trigger temporal recalibration (rapid recalibration). However, it has been suggested that the mechanisms underlying these two forms of temporal recalibration differ. Here, we examined whether spatial congruence can serve as a grouping cue for rapid recalibration when audiovisual pairs are not defined by their temporal relationships. Participants made a simultaneity judgment for a pair of audiovisual stimuli after being exposed once to three consecutive stimuli in a "light-sound-light" or "sound-light-sound" order with equal temporal intervals. The spatial positions of the adapting stimuli were manipulated so that an audiovisual pair appeared from the same position (e.g., left) and the remaining stimulus from another position (e.g., right). In three experiments, the spatial congruence of the audiovisual adapting stimuli showed no modulatory effect, while we replicated the rapid recalibration effects. Rather, rapid recalibration occurred according to the temporal order of the first light and sound. Our findings suggest that, in contrast to temporal recalibration with repeated exposure, the perceptual systems underlying rapid recalibration simply combine individual visual and auditory inputs based on the order in which they arrive.
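Shifts of this kind are typically quantified as a change in the point of subjective simultaneity (PSS). A minimal sketch, assuming a Gaussian fit to the proportion of "simultaneous" responses across stimulus onset asynchronies; the data points and fitting choices are illustrative, not taken from the study.

```python
# Hedged sketch: estimating the point of subjective simultaneity (PSS) by
# fitting a Gaussian to the proportion of "simultaneous" responses across
# stimulus onset asynchronies (SOAs). All numbers are fabricated.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, pss, width):
    return amp * np.exp(-0.5 * ((soa - pss) / width) ** 2)

soas = np.array([-300, -200, -100, 0, 100, 200, 300])          # ms; audio-leading is negative
p_simultaneous = np.array([0.10, 0.30, 0.70, 0.90, 0.75, 0.35, 0.15])

params, _ = curve_fit(gaussian, soas, p_simultaneous, p0=[1.0, 0.0, 100.0])
print(f"PSS = {params[1]:.1f} ms")   # rapid recalibration would shift this value trial to trial
```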
{"title":"No effect of spatial congruence on rapid temporal recalibration to audiovisual asynchrony.","authors":"Kyuto Uno, Souta Hidaka","doi":"10.3758/s13423-023-02441-9","DOIUrl":"10.3758/s13423-023-02441-9","url":null,"abstract":"<p><p>The brain integrates multisensory information to construct coherent perceptual representations based on spatial and temporal congruence. Intriguingly, multisensory timing perception can be flexibly calibrated. Repeated exposure to audiovisual asynchrony induces shifts in subjective simultaneity (temporal recalibration). Spatial congruence is known to serve as a grouping cue for recalibration when the audiovisual temporal relationship is ambiguous during exposure. A single exposure to audiovisual asynchrony can also trigger temporal recalibration (rapid recalibration). However, it has been suggested that the underlying mechanisms of these temporal recalibrations differ. Here, we examined whether spatial congruence can be a grouping cue for rapid recalibration when audiovisual pairs are not defined by temporal relationships. Participants made a simultaneity judgment for a pair of audiovisual stimuli after adapting three consecutive stimuli once in a \"light-sound-light\" or \"sound-light-sound\" order with an equal temporal interval. The spatial positions of the adapting stimuli were manipulated as an audiovisual pair from the same position (e.g., left) and the remaining stimulus from another position (e.g., right). In three experiments, the spatial congruence of the audiovisual adapting stimuli did not show a modulatory effect, while we replicated the rapid recalibration effects. Rather, rapid recalibration occurred according to the temporal order of the first light and sound. Our findings suggest that, in contrast to temporal recalibration with repeated exposure, the perceptual systems underlying rapid recalibration simply combine individual visual and auditory inputs based on the order in which they arrive.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139088142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychometrics in experimental psychology: A case for calibration.
Pub Date: 2024-08-01 | Epub Date: 2023-12-26 | DOI: 10.3758/s13423-023-02421-z
Dominik R Bach
Psychometrics is historically grounded in the study of individual differences. Consequently, common metrics such as quantitative validity and reliability require between-person variance in a psychological variable to be meaningful. Experimental psychology, in contrast, deals with variance between treatments, and experiments often strive to minimise within-group person variance. In this article, I ask whether and how psychometric evaluation can be performed in experimental psychology. A commonly used strategy is to harness between-person variance in the treatment effect. Using simulated data, I show that this approach can be misleading when between-person variance is low, and in the face of methods variance. I argue that this situation is common in experimental psychology, because low between-person variance is desirable, and because methods variance is no more problematic in experimental settings than any other source of between-person variance. By relating validity and reliability with the corresponding concepts in measurement science outside psychology, I show how experiment-based calibration can serve to compare the psychometric quality of different measurement methods in experimental psychology.
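The core argument can be illustrated with a few lines of simulation, in the spirit of the article's simulated data: holding measurement error fixed, test-retest reliability collapses as true between-person variance shrinks. The numbers below are illustrative assumptions, not the article's simulations.

```python
# Hedged sketch: the same measurement precision yields very different
# test-retest reliability depending on true between-person variance.
import numpy as np

rng = np.random.default_rng(1)
n = 200
error_sd = 1.0  # identical measurement error in both scenarios

for person_sd in (2.0, 0.2):  # high vs. low between-person variance
    true_effect = rng.normal(0, person_sd, n)          # each person's true treatment effect
    test = true_effect + rng.normal(0, error_sd, n)    # session 1 measurement
    retest = true_effect + rng.normal(0, error_sd, n)  # session 2 measurement
    r = np.corrcoef(test, retest)[0, 1]
    print(f"between-person SD = {person_sd}: test-retest r = {r:.2f}")
```

With the same instrument, the correlation drops from roughly .80 to near zero as between-person variance shrinks, which is why the article argues reliability alone can mislead in experimental settings.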
{"title":"Psychometrics in experimental psychology: A case for calibration.","authors":"Dominik R Bach","doi":"10.3758/s13423-023-02421-z","DOIUrl":"10.3758/s13423-023-02421-z","url":null,"abstract":"<p><p>Psychometrics is historically grounded in the study of individual differences. Consequently, common metrics such as quantitative validity and reliability require between-person variance in a psychological variable to be meaningful. Experimental psychology, in contrast, deals with variance between treatments, and experiments often strive to minimise within-group person variance. In this article, I ask whether and how psychometric evaluation can be performed in experimental psychology. A commonly used strategy is to harness between-person variance in the treatment effect. Using simulated data, I show that this approach can be misleading when between-person variance is low, and in the face of methods variance. I argue that this situation is common in experimental psychology, because low between-person variance is desirable, and because methods variance is no more problematic in experimental settings than any other source of between-person variance. By relating validity and reliability with the corresponding concepts in measurement science outside psychology, I show how experiment-based calibration can serve to compare the psychometric quality of different measurement methods in experimental psychology.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11358352/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139040449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Age-dependent changes in the anger superiority effect: Evidence from a visual search task.
Pub Date: 2024-08-01 | Epub Date: 2024-01-18 | DOI: 10.3758/s13423-023-02401-3
Francesco Ceccarini, Ilaria Colpizzi, Corrado Caudek
The perception of threatening facial expressions is a critical skill necessary for detecting the emotional states of others and responding appropriately. The anger superiority effect hypothesis suggests that individuals are better at processing and identifying angry faces compared with other, nonthreatening facial expressions. In adults, the anger superiority effect is present even after controlling for bottom-up visual saliency, and when ecologically valid stimuli are used. However, it is as yet unclear whether this effect is present in children. To fill this gap, we tested the anger superiority effect in children aged 6-14 years in a visual search task, using emotional dynamic stimuli and equating the visual salience of targets and distractors. The results suggest that in childhood, the anger superiority effect consists of improved accuracy in detecting angry faces, while in adolescence, the ability to discriminate angry faces undergoes further development, enabling faster and more accurate threat detection.
{"title":"Age-dependent changes in the anger superiority effect: Evidence from a visual search task.","authors":"Francesco Ceccarini, Ilaria Colpizzi, Corrado Caudek","doi":"10.3758/s13423-023-02401-3","DOIUrl":"10.3758/s13423-023-02401-3","url":null,"abstract":"<p><p>The perception of threatening facial expressions is a critical skill necessary for detecting the emotional states of others and responding appropriately. The anger superiority effect hypothesis suggests that individuals are better at processing and identifying angry faces compared with other nonthreatening facial expressions. In adults, the anger superiority effect is present even after controlling for the bottom-up visual saliency, and when ecologically valid stimuli are used. However, it is as yet unclear whether this effect is present in children. To fill this gap, we tested the anger superiority effect in children ages 6-14 years in a visual search task by using emotional dynamic stimuli and equating the visual salience of target and distractors. The results suggest that in childhood, the angry superiority effect consists of improved accuracy in detecting angry faces, while in adolescence, the ability to discriminate angry faces undergoes further development, enabling faster and more accurate threat detection.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11358229/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139491941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jumping and leaping estimations using optic flow.
Pub Date: 2024-08-01 | Epub Date: 2024-01-29 | DOI: 10.3758/s13423-024-02459-7
Lisa P Y Lin, Sally A Linkenauger
Optic flow provides information on movement direction and speed during locomotion. Changing the relationship between optic flow and walking speed via training has been shown to influence subsequent estimations of distance and hill steepness. Previous research has shown that experiencing slow optic flow at a given walking speed is associated with increased effort and distance overestimation in comparison to experiencing fast optic flow at the same walking speed. Here, we investigated whether exposure to different optic flow speeds relative to gait influences perceptions of leaping and jumping ability. Participants estimated their maximum leaping and jumping ability after exposure to either fast or moderate optic flow at the same walking speed. Those calibrated to fast optic flow estimated farther leaping and jumping abilities than those calibrated to moderate optic flow. These findings suggest that recalibration between optic flow and walking speed may specify an action boundary when scaled to actions such as leaping; alternatively, the manipulation of optic flow speed may have changed the anticipated effort associated with walking a prescribed distance, which in turn influences one's perceived action capabilities for jumping and leaping.
{"title":"Jumping and leaping estimations using optic flow.","authors":"Lisa P Y Lin, Sally A Linkenauger","doi":"10.3758/s13423-024-02459-7","DOIUrl":"10.3758/s13423-024-02459-7","url":null,"abstract":"<p><p>Optic flow provides information on movement direction and speed during locomotion. Changing the relationship between optic flow and walking speed via training has been shown to influence subsequent distance and hill steepness estimations. Previous research has shown that experience with slow optic flow at a given walking speed was associated with increased effort and distance overestimation in comparison to experiencing with fast optic flow at the same walking speed. Here, we investigated whether exposure to different optic flow speeds relative to gait influences perceptions of leaping and jumping ability. Participants estimated their maximum leaping and jumping ability after exposure to either fast or moderate optic flow at the same walking speed. Those calibrated to fast optic flow estimated farther leaping and jumping abilities than those calibrated to moderate optic flow. Findings suggest that recalibration between optic flow and walking speed may specify an action boundary when calibrated or scaled to actions such as leaping, and possibly, the manipulation of optic flow speed has resulted in a change in the associated anticipated effort for walking a prescribed distance, which in turn influence one's perceived action capabilities for jumping and leaping.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11358219/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139576401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linguistic features of spontaneous speech predict conversational recall.
Pub Date: 2024-08-01 | Epub Date: 2024-01-12 | DOI: 10.3758/s13423-023-02440-w
Evgeniia Diachek, Sarah Brown-Schmidt
Empirical studies of conversational recall show that the amount of conversation that can be recalled after a delay is limited and biased in favor of one's own contributions. What aspects of a conversational interaction shape what will and will not be recalled? This study aims to predict the contents of conversation that will be recalled based on linguistic features of what was said. Across 59 conversational dyads, we observed that two linguistic features that are hallmarks of interactive language use, disfluency (um/uh) and backchannelling (ok, yeah), promoted recall. Two other features, disagreements between the interlocutors and use of "like", were not predictive of recall. While self-generated material was better remembered overall, both hearing and producing disfluency and backchannels improved memory for the associated utterances. Finally, the disfluency-related memory boost was similar regardless of the number of disfluencies in the utterance. Overall, we conclude that interactional linguistic features of conversation are predictive of what is and is not recalled following conversation.
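One way to picture the reported analysis is a logistic model predicting whether an utterance is recalled from counts of interactional features. The sketch below uses fabricated data and an assumed feature coding; the authors' actual model specification may differ.

```python
# Hedged sketch: logistic regression of utterance recall on interactional
# features. Feature names, coefficients, and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_utterances = 1000

X = np.column_stack([
    rng.poisson(0.3, n_utterances),    # disfluency count (um/uh)
    rng.poisson(0.2, n_utterances),    # backchannel count (ok, yeah)
    rng.integers(0, 2, n_utterances),  # self-generated (1) vs. heard (0)
])
# Simulate the reported direction of effects: disfluency, backchannels,
# and self-generation all raise recall probability.
logit = -1.0 + 0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.7 * X[:, 2]
recalled = rng.random(n_utterances) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, recalled)
print(dict(zip(["disfluency", "backchannel", "self_generated"], model.coef_[0].round(2))))
```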
{"title":"Linguistic features of spontaneous speech predict conversational recall.","authors":"Evgeniia Diachek, Sarah Brown-Schmidt","doi":"10.3758/s13423-023-02440-w","DOIUrl":"10.3758/s13423-023-02440-w","url":null,"abstract":"<p><p>Empirical studies of conversational recall show that the amount of conversation that can be recalled after a delay is limited and biased in favor of one's own contributions. What aspects of a conversational interaction shape what will and will not be recalled? This study aims to predict the contents of conversation that will be recalled based on linguistic features of what was said. Across 59 conversational dyads, we observed that two linguistic features that are hallmarks of interactive language use-disfluency (um/uh) and backchannelling (ok, yeah)-promoted recall. Two other features-disagreements between the interlocutors and use of \"like\"-were not predictive of recall. While self-generated material was better remembered overall, both hearing and producing disfluency and backchannels improved memory for the associated utterances. Finally, the disfluency-related memory boost was similar regardless of the number of disfluencies in the utterance. Overall, we conclude that interactional linguistic features of conversation are predictive of what is and is not recalled following conversation.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139432896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Amplitude envelope onset characteristics modulate phase locking for speech auditory-motor synchronization.
Pub Date: 2024-08-01 | Epub Date: 2024-01-16 | DOI: 10.3758/s13423-023-02446-4
Min Zhu, Fei Chen, Chenxin Shi, Yang Zhang
The spontaneous speech-to-speech synchronization (SSS) test has been shown to be an effective behavioral method for estimating cortical speech auditory-motor coupling strength through the phase-locking value (PLV) between auditory input and motor output. This study further investigated how amplitude envelope onset variations of the auditory speech signal may influence speech auditory-motor synchronization. Sixty Mandarin-speaking adults listened to a stream of randomly presented syllables at an increasing speed while concurrently whispering in synchrony with the rhythm of the auditory stimuli, whose onset consistency was manipulated across aspirated, unaspirated, and mixed conditions. The participants' PLVs for the three conditions in the SSS test were derived and compared. Results showed that syllable rise time affected speech auditory-motor synchronization in a bifurcated fashion. Specifically, PLVs were significantly higher in the temporally more consistent conditions (aspirated or unaspirated) than in the less consistent condition (mixed) for high synchronizers. In contrast, low synchronizers tended to be immune to onset consistency. Overall, these results show how syllable onset consistency in the rise time of the amplitude envelope may modulate the strength of speech auditory-motor coupling. This study supports the application of the SSS test to examine individual differences in the integration of the perception and production systems, with implications for those with speech and language disorders who have difficulty processing speech onset characteristics such as rise time.
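The phase-locking value at the heart of the SSS test has a compact definition: PLV = |mean over time of exp(i(φ_audio − φ_motor))|, with instantaneous phases usually taken from the Hilbert transform of each amplitude envelope. A minimal sketch on synthetic envelopes follows; the signals and parameters are illustrative assumptions, not the study's data.

```python
# Hedged sketch: computing the phase-locking value (PLV) between an auditory
# stimulus envelope and a (lagged, noisy) motor-output envelope. Synthetic
# signals stand in for real auditory input and whispered speech.
import numpy as np
from scipy.signal import hilbert

fs, dur, f_syll = 1000, 10.0, 4.5          # sample rate (Hz), duration (s), ~syllable rate (Hz)
t = np.arange(0, dur, 1 / fs)

audio_env = 1 + np.cos(2 * np.pi * f_syll * t)      # stimulus amplitude envelope
noise = 0.3 * np.random.default_rng(3).standard_normal(t.size)
motor_env = 1 + np.cos(2 * np.pi * f_syll * t - 0.4) + noise  # lagged, noisy copy

phase_a = np.angle(hilbert(audio_env - audio_env.mean()))     # instantaneous phases
phase_m = np.angle(hilbert(motor_env - motor_env.mean()))
plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_m))))       # 0 = no locking, 1 = perfect
print(f"PLV = {plv:.2f}")
```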
{"title":"Amplitude envelope onset characteristics modulate phase locking for speech auditory-motor synchronization.","authors":"Min Zhu, Fei Chen, Chenxin Shi, Yang Zhang","doi":"10.3758/s13423-023-02446-4","DOIUrl":"10.3758/s13423-023-02446-4","url":null,"abstract":"<p><p>The spontaneous speech-to-speech synchronization (SSS) test has been shown to be an effective behavioral method to estimate cortical speech auditory-motor coupling strength through phase-locking value (PLV) between auditory input and motor output. This study further investigated how amplitude envelope onset variations of the auditory speech signal may influence the speech auditory-motor synchronization. Sixty Mandarin-speaking adults listened to a stream of randomly presented syllables at an increasing speed while concurrently whispering in synchrony with the rhythm of the auditory stimuli whose onset consistency was manipulated, consisting of aspirated, unaspirated, and mixed conditions. The participants' PLVs for the three conditions in the SSS test were derived and compared. Results showed that syllable rise time affected the speech auditory-motor synchronization in a bifurcated fashion. Specifically, PLVs were significantly higher in the temporally more consistent conditions (aspirated or unaspirated) than those in the less consistent condition (mixed) for high synchronizers. In contrast, low synchronizers tended to be immune to the onset consistency. Overall, these results validated how syllable onset consistency in the rise time of amplitude envelope may modulate the strength of speech auditory-motor coupling. This study supports the application of the SSS test to examine individual differences in the integration of perception and production systems, which has implications for those with speech and language disorders that have difficulty with processing speech onset characteristics such as rise time.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139472181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Patterns of saliency and semantic features distinguish gaze of expert and novice viewers of surveillance footage.
Pub Date: 2024-08-01 | Epub Date: 2024-01-25 | DOI: 10.3758/s13423-024-02454-y
Yujia Peng, Joseph M Burling, Greta K Todorova, Catherine Neary, Frank E Pollick, Hongjing Lu
When viewing the actions of others, we not only see patterns of body movements, but we also "see" people's intentions and social relations. Experienced forensic examiners - Closed Circuit Television (CCTV) operators - have been shown to exhibit superior performance in identifying and predicting hostile intentions from surveillance footage compared with novices. However, it remains largely unknown what visual content CCTV operators actively attend to, and whether CCTV operators develop different strategies for active information seeking than novices do. Here, we conducted a computational analysis of gaze-centered stimuli derived from the eye movements of experienced CCTV operators and novices viewing the same surveillance footage. Low-level image features were extracted by a visual saliency model, whereas object-level semantic features were extracted from gaze-centered regions by a deep convolutional neural network (DCNN), AlexNet. We found that the looking behavior of CCTV operators differs from that of novices in actively attending to visual content with different patterns of saliency and semantic features. Expertise in selectively utilizing informative features at different levels of the visual hierarchy may play an important role in facilitating the efficient detection of social relationships between agents and the prediction of harmful intentions.
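The object-level feature step can be sketched as follows: a gaze-centered image patch is passed through AlexNet and an intermediate activation is kept as a semantic descriptor. The random patch, layer choice, and untrained weights below are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: extracting an object-level feature vector from a
# gaze-centered patch with AlexNet. Swap weights=None for pretrained
# weights (e.g., AlexNet_Weights.DEFAULT) in real use.
import torch
import torchvision.models as models

alexnet = models.alexnet(weights=None).eval()
patch = torch.rand(1, 3, 224, 224)       # stand-in for a gaze-centered crop, resized to AlexNet input

with torch.no_grad():
    conv_maps = alexnet.features(patch)                          # last conv-block activations
    semantic_vec = torch.flatten(alexnet.avgpool(conv_maps), 1)  # (1, 9216) descriptor

print(semantic_vec.shape)  # vectors like this can then be compared between expert and novice gaze
```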
{"title":"Patterns of saliency and semantic features distinguish gaze of expert and novice viewers of surveillance footage.","authors":"Yujia Peng, Joseph M Burling, Greta K Todorova, Catherine Neary, Frank E Pollick, Hongjing Lu","doi":"10.3758/s13423-024-02454-y","DOIUrl":"10.3758/s13423-024-02454-y","url":null,"abstract":"<p><p>When viewing the actions of others, we not only see patterns of body movements, but we also \"see\" the intentions and social relations of people. Experienced forensic examiners - Closed Circuit Television (CCTV) operators - have been shown to convey superior performance in identifying and predicting hostile intentions from surveillance footage than novices. However, it remains largely unknown what visual content CCTV operators actively attend to, and whether CCTV operators develop different strategies for active information seeking from what novices do. Here, we conducted computational analysis for the gaze-centered stimuli captured by experienced CCTV operators and novices' eye movements when viewing the same surveillance footage. Low-level image features were extracted by a visual saliency model, whereas object-level semantic features were extracted by a deep convolutional neural network (DCNN), AlexNet, from gaze-centered regions. We found that the looking behavior of CCTV operators differs from novices by actively attending to visual contents with different patterns of saliency and semantic features. Expertise in selectively utilizing informative features at different levels of visual hierarchy may play an important role in facilitating the efficient detection of social relationships between agents and the prediction of harmful intentions.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11358171/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139564733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How experts and novices judge other people's knowledgeability from language use.
Pub Date: 2024-08-01 | Epub Date: 2024-01-04 | DOI: 10.3758/s13423-023-02433-9
Alexander H Bower, Nicole Han, Ansh Soni, Miguel P Eckstein, Mark Steyvers
How accurate are people in judging someone else's knowledge based on their language use, and do more knowledgeable people use different cues to make these judgments? We address this by recruiting a group of participants ("informants") to answer general knowledge questions and describe various images belonging to different categories (e.g., cartoons, basketball). A second group of participants ("evaluators") also answer general knowledge questions and decide who is more knowledgeable within pairs of informants, based on these descriptions. Evaluators perform above chance at identifying the most knowledgeable informants (65% with only one description available). The less knowledgeable evaluators base their decisions on the number of specific statements, regardless of whether the statements are true or false. The more knowledgeable evaluators treat true and false statements differently and penalize the knowledge they attribute to informants who produce specific yet false statements. Our findings demonstrate the power of a few words when assessing others' knowledge and have implications for how misinformation is processed differently between experts and novices.
{"title":"How experts and novices judge other people's knowledgeability from language use.","authors":"Alexander H Bower, Nicole Han, Ansh Soni, Miguel P Eckstein, Mark Steyvers","doi":"10.3758/s13423-023-02433-9","DOIUrl":"10.3758/s13423-023-02433-9","url":null,"abstract":"<p><p>How accurate are people in judging someone else's knowledge based on their language use, and do more knowledgeable people use different cues to make these judgments? We address this by recruiting a group of participants (\"informants\") to answer general knowledge questions and describe various images belonging to different categories (e.g., cartoons, basketball). A second group of participants (\"evaluators\") also answer general knowledge questions and decide who is more knowledgeable within pairs of informants, based on these descriptions. Evaluators perform above chance at identifying the most knowledgeable informants (65% with only one description available). The less knowledgeable evaluators base their decisions on the number of specific statements, regardless of whether the statements are true or false. The more knowledgeable evaluators treat true and false statements differently and penalize the knowledge they attribute to informants who produce specific yet false statements. Our findings demonstrate the power of a few words when assessing others' knowledge and have implications for how misinformation is processed differently between experts and novices.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11358192/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139098468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Individual differences in inattentional blindness.
Pub Date: 2024-08-01 | Epub Date: 2024-01-05 | DOI: 10.3758/s13423-023-02431-x
Daniel J Simons, Connor M Hults, Yifan Ding
People often fail to notice unexpected objects and events when they are performing an attention-demanding task, a phenomenon known as inattentional blindness. We might expect individual differences in cognitive ability or personality to predict who will and will not notice unexpected objects, given that people vary in their ability to perform attention-demanding tasks. We conducted a comprehensive literature search for empirical inattentional blindness reports and identified 38 records that included individual difference measures and met our inclusion criteria. From those, we extracted individual difference effect sizes for 31 records, which included a total of 74 distinct between-groups samples with at least one codable individual difference measure. We conducted separate meta-analyses of the relationship between noticing/missing an unexpected object and scores on each of the 14 cognitive and 19 personality measures in this dataset. We also aggregated across personality measures reflecting positive/negative affectivity or openness/absorption, and across cognitive measures of interference, attention breadth, and memory. Collectively, these meta-analyses provided little evidence that individual differences in ability or personality predict noticing of an unexpected object. A robustness analysis that excluded samples with extremely low numbers of people who noticed or missed produced similar results. For most measures, the number of samples and the total sample sizes were small, and larger studies are needed to examine individual differences in inattentional blindness more systematically. However, the results are consistent with the idea that noticing of unexpected objects or events differs from deliberate attentional control tasks in that it is not reliably predicted by individual differences in cognitive ability.
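The pooling step of such a meta-analysis can be sketched with a DerSimonian-Laird random-effects model over per-sample effect sizes. The effect sizes and variances below are fabricated placeholders, not values from the review.

```python
# Hedged sketch: random-effects (DerSimonian-Laird) pooling of per-sample
# effects, e.g., log odds ratios relating an individual-difference score
# to noticing vs. missing an unexpected object. All inputs are fabricated.
import numpy as np

yi = np.array([0.10, -0.05, 0.20, 0.02, -0.12])  # per-sample effect sizes
vi = np.array([0.04, 0.02, 0.06, 0.03, 0.05])    # their sampling variances

w_fixed = 1 / vi
q = np.sum(w_fixed * (yi - np.average(yi, weights=w_fixed)) ** 2)  # heterogeneity statistic
df = len(yi) - 1
c = w_fixed.sum() - (w_fixed ** 2).sum() / w_fixed.sum()
tau2 = max(0.0, (q - df) / c)                    # between-sample variance estimate

w = 1 / (vi + tau2)                              # random-effects weights
pooled = np.sum(w * yi) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f} (95% CI)")
```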
{"title":"Individual differences in inattentional blindness.","authors":"Daniel J Simons, Connor M Hults, Yifan Ding","doi":"10.3758/s13423-023-02431-x","DOIUrl":"10.3758/s13423-023-02431-x","url":null,"abstract":"<p><p>People often fail to notice unexpected objects and events when they are performing an attention-demanding task, a phenomenon known as inattentional blindness. We might expect individual differences in cognitive ability or personality to predict who will and will not notice unexpected objects given that people vary in their ability to perform attention-demanding tasks. We conducted a comprehensive literature search for empirical inattentional blindness reports and identified 38 records that included individual difference measures and met our inclusion criteria. From those, we extracted individual difference effect sizes for 31 records which included a total of 74 distinct, between-groups samples with at least one codable individual difference measure. We conducted separate meta-analyses of the relationship between noticing/missing an unexpected object and scores on each of the 14 cognitive and 19 personality measures in this dataset. We also aggregated across personality measures reflecting positive/negative affectivity or openness/absorption and cognitive measures of interference, attention breadth, and memory. Collectively, these meta-analyses provided little evidence that individual differences in ability or personality predict noticing of an unexpected object. A robustness analysis that excluded samples with extremely low numbers of people who noticed or missed produced similar results. For most measures, the number of samples and the total sample sizes were small, and larger studies are needed to examine individual differences in inattentional blindness more systematically. However, the results are consistent with the idea that noticing of unexpected objects or events differs from deliberate attentional control tasks in that it is not reliably predicted by individual differences in cognitive ability.</p>","PeriodicalId":20763,"journal":{"name":"Psychonomic Bulletin & Review","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139106535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}