Expansion of the SyllabO+ corpus and database: Words, lemmas, and morphology.
Pub Date: 2025-01-07 | DOI: 10.3758/s13428-024-02582-2
Noémie Auclair-Ouellet, Alexandra Lavoie, Pascale Bédard, Alexandra Barbeau-Morrison, Patrick Drouin, Pascale Tremblay
Having a detailed description of the psycholinguistic properties of a language is essential for conducting well-controlled language experiments. However, there is a paucity of databases for some languages and regional varieties, including Québec French. The SyllabO+ corpus was created to provide a complete phonological and syllabic analysis of a corpus of spoken Québec French. In the present study, the corpus was expanded with 41 additional speakers, bringing the total to 225. The analysis was also expanded to include three new databases: unique words, lemmas, and morphemes (inflectional, derivational, and compounds). Next, the internal structure of unique words was analyzed to identify roots, inflectional markers, and affixes, as well as the components of compounds. Additionally, a group of 441 speakers of Québec French provided semantic transparency ratings for 3764 derived words. Results from the semantic transparency judgment study show broad inter-individual variability for words of medium transparency. No influence of sociodemographic variables was found. Transparency ratings are consistent with studies showing the greater transparency of suffixed words compared to prefixed words. Results for participants who speak French as a second language support the association between second-language proficiency and morphological processing.
{"title":"Expansion of the SyllabO+ corpus and database: Words, lemmas, and morphology.","authors":"Noémie Auclair-Ouellet, Alexandra Lavoie, Pascale Bédard, Alexandra Barbeau-Morrison, Patrick Drouin, Pascale Tremblay","doi":"10.3758/s13428-024-02582-2","DOIUrl":"10.3758/s13428-024-02582-2","url":null,"abstract":"<p><p>Having a detailed description of the psycholinguistic properties of a language is essential for conducting well-controlled language experiments. However, there is a paucity of databases for some languages and regional varieties, including Québec French. The SyllabO+ corpus was created to provide a complete phonological and syllabic analysis of a corpus of spoken Québec French. In the present study, the corpus was expanded with 41 additional speakers, bringing the total to 225. The analysis was also expanded to include three new databases: unique words, lemmas, and morphemes (inflectional, derivational, and compounds). Next, the internal structure of unique words was analyzed to identify roots, inflectional markers, and affixes, as well as the components of compounds. Additionally, a group of 441 speakers of Québec French provided semantic transparency ratings for 3764 derived words. Results from the semantic transparency judgment study show broad inter-individual variability for words of medium transparency. No influence of sociodemographic variables was found. Transparency ratings are coherent with studies showing the greater transparency of suffixed words compared to prefixed words. Results for participants who speak French as a second language support the association between second-language proficiency and morphological processing.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"47"},"PeriodicalIF":4.6,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Beyond Reality Image Collection (BRIC).
Pub Date: 2025-01-07 | DOI: 10.3758/s13428-024-02586-y
Noga Segal-Gordon, Yoav Bar-Anan
The Beyond Reality Image Collection (BRIC) is a set of 648 photos, some painted by an artist and some generated by artificial intelligence. Unlike previous photosets, the BRIC focused on nonrealistic visuals. This collection includes abstract and non-abstract paintings and nonrealistic photographs depicting objects, scenes, animals, humans, and fantastical creatures with varying degrees of unreal elements. We collected evaluative ratings of the photos, using a convenience sample of 16,208 participants in a total of 25,321 sessions. We used multiple evaluation measures: binary positive/negative and like/dislike categorization, seven-point ratings on these attributes, both under no time pressure and under time pressure, and evaluative priming scores. The mean evaluation of the photos on the different measures was highly correlated, but some photos consistently elicited a discrepant evaluative reaction between the measures. The BRIC is a valuable resource for eliciting evaluative reactions and can contribute to research on evaluative processes and affective responses.
{"title":"The Beyond Reality Image Collection (BRIC).","authors":"Noga Segal-Gordon, Yoav Bar-Anan","doi":"10.3758/s13428-024-02586-y","DOIUrl":"10.3758/s13428-024-02586-y","url":null,"abstract":"<p><p>The Beyond Reality Image Collection (BRIC) is a set of 648 photos, some painted by an artist and some generated by artificial intelligence. Unlike previous photosets, the BRIC focused on nonrealistic visuals. This collection includes abstract and non-abstract paintings and nonrealistic photographs depicting objects, scenes, animals, humans, and fantastical creatures with varying degrees of unreal elements. We collected evaluative ratings of the photos, using a convenience sample of 16,208 participants in a total of 25,321 sessions. We used multiple evaluation measures: binary positive/negative and like/dislike categorization, seven-point ratings on these attributes, both under no time pressure and under time pressure, and evaluative priming scores. The mean evaluation of the photos on the different measures was highly correlated, but some photos consistently elicited a discrepant evaluative reaction between the measures. The BRIC is a valuable resource for eliciting evaluative reactions and can contribute to research on evaluative processes and affective responses.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"49"},"PeriodicalIF":4.6,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11706899/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability and validity of four cognitive interpretation bias measures in the context of social anxiety.
Pub Date: 2025-01-07 | DOI: 10.3758/s13428-024-02576-0
Sascha B Duken, Jun Moriya, Colette Hirsch, Marcella L Woud, Bram van Bockstaele, Elske Salemink
People with social anxiety disorder tend to interpret ambiguous social information in a negative rather than positive manner. Such interpretation biases may cause and maintain anxiety symptoms. However, there is considerable variability in the observed effects across studies, with some not finding a relationship between interpretation biases and social anxiety. Poor psychometric properties of interpretation bias measures may explain such inconsistent findings. We evaluated the internal consistency, test-retest reliability, convergent validity, and concurrent validity of four interpretation bias measures, ranging from more implicit and automatic to more explicit and reflective: the probe scenario task, the recognition task, the scrambled sentences task, and the interpretation and judgmental bias questionnaire. Young adults (N = 94) completed interpretation bias measures in two sessions separated by one week. Psychometric properties were poor for the probe scenario and not acceptable for the recognition task. The reliability of the scrambled sentences task and the interpretation and judgmental bias questionnaire was good, and they correlated highly with social anxiety and each other, supporting their concurrent and convergent validity. However, there are methodological challenges that should be considered when measuring interpretation biases, even if psychometric indices suggest high measurement validity. We also discuss likely reasons for poor psychometric properties of some tasks and suggest potential solutions to improve the assessment of implicit and automatic biases in social anxiety in future research.
{"title":"Reliability and validity of four cognitive interpretation bias measures in the context of social anxiety.","authors":"Sascha B Duken, Jun Moriya, Colette Hirsch, Marcella L Woud, Bram van Bockstaele, Elske Salemink","doi":"10.3758/s13428-024-02576-0","DOIUrl":"10.3758/s13428-024-02576-0","url":null,"abstract":"<p><p>People with social anxiety disorder tend to interpret ambiguous social information in a negative rather than positive manner. Such interpretation biases may cause and maintain anxiety symptoms. However, there is considerable variability in the observed effects across studies, with some not finding a relationship between interpretation biases and social anxiety. Poor psychometric properties of interpretation bias measures may explain such inconsistent findings. We evaluated the internal consistency, test-retest reliability, convergent validity, and concurrent validity of four interpretation bias measures, ranging from more implicit and automatic to more explicit and reflective: the probe scenario task, the recognition task, the scrambled sentences task, and the interpretation and judgmental bias questionnaire. Young adults (N = 94) completed interpretation bias measures in two sessions separated by one week. Psychometric properties were poor for the probe scenario and not acceptable for the recognition task. The reliability of the scrambled sentences task and the interpretation and judgmental bias questionnaire was good, and they correlated highly with social anxiety and each other, supporting their concurrent and convergent validity. However, there are methodological challenges that should be considered when measuring interpretation biases, even if psychometric indices suggest high measurement validity. We also discuss likely reasons for poor psychometric properties of some tasks and suggest potential solutions to improve the assessment of implicit and automatic biases in social anxiety in future research.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"48"},"PeriodicalIF":4.6,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11706852/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trace: A research media player measuring real-time audience engagement.
Pub Date: 2025-01-06 | DOI: 10.3758/s13428-024-02522-0
Ana Levordashka, Mike Richardson, Rebecca J Hirst, Iain D Gilchrist, Danaë Stanton Fraser
Measuring attention and engagement is essential for understanding a wide range of psychological phenomena. Advances in technology have made it possible to measure real-time attention to naturalistic stimuli, providing ecologically valid insight into temporal dynamics. We developed a research protocol called Trace, which records anonymous facial landmarks, expressions, and patterns of movement associated with engagement in screen-based media. Trace runs in a standard internet browser and resembles a contemporary media player. It is embedded in the open-source package PsychoJS (the JavaScript sister library of PsychoPy) hosted via Pavlovia, and can be integrated with a wide range of behavioral research methods. Developed over multiple iterations and tested with over 200 participants in three studies, including the official broadcast of a major theatre production, Trace is a powerful, user-friendly protocol allowing behavioral researchers to capture audience attention and engagement in screen-based media as part of authentic, ecologically valid audience experiences.
{"title":"Trace: A research media player measuring real-time audience engagement.","authors":"Ana Levordashka, Mike Richardson, Rebecca J Hirst, Iain D Gilchrist, Danaë Stanton Fraser","doi":"10.3758/s13428-024-02522-0","DOIUrl":"10.3758/s13428-024-02522-0","url":null,"abstract":"<p><p>Measuring attention and engagement is essential for understanding a wide range of psychological phenomena. Advances in technology have made it possible to measure real-time attention to naturalistic stimuli, providing ecologically valid insight into temporal dynamics. We developed a research protocol called Trace, which records anonymous facial landmarks, expressions, and patterns of movement associated with engagement in screen-based media. Trace runs in a standard internet browser and resembles a contemporary media player. It is embedded in the open-source package PsychoJS (the JavaScript sister library of PsychoPy) hosted via Pavlovia, and can be integrated with a wide range of behavioral research methods. Developed over multiple iterations and tested with over 200 participants in three studies, including the official broadcast of a major theatre production, Trace is a powerful, user-friendly protocol allowing behavioral researchers to capture audience attention and engagement in screen-based media as part of authentic, ecologically valid audience experiences.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"44"},"PeriodicalIF":4.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11703984/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantitative comparison of a mobile, tablet-based eye-tracker and two stationary, video-based eye-trackers.
Pub Date: 2025-01-06 | DOI: 10.3758/s13428-024-02542-w
Aylin König, Uwe Thomas, Frank Bremmer, Stefan Dowiasch
The analysis of eye movements is a noninvasive, reliable and fast method to detect and quantify brain (dys)function. Here, we investigated the performance of two novel eye-trackers, the Thomas Oculus Motus-research mobile (TOM-rm) and the TOM-research stationary (TOM-rs), and compared them with the performance of a well-established video-based eye-tracker, i.e., the EyeLink 1000 Plus (EL). The TOM-rm is a fully integrated, tablet-based mobile device that presents visual stimuli and records head-unrestrained eye movements at 30 Hz without additional infrared illumination. The TOM-rs is a stationary, video-based eye-tracker that records eye movements at either high spatial or high temporal resolution. We compared the performance of all three eye-trackers in two different behavioral tasks: pro- and anti-saccade and free viewing. We collected data from 30 human subjects while running all three eye-tracking devices in parallel. Parameters requiring a high spatial or temporal resolution (e.g., saccade latency or gain), as derived from the data, differed significantly between the EL and the TOM-rm in both tasks. Differences between results derived from the TOM-rs and the EL were most likely due to experimental conditions, which could not be optimized for both systems simultaneously. We conclude that the TOM-rm can be used for measuring basic eye-movement parameters, such as the error rate in a typical pro- and anti-saccade task, or the number and position of fixations in a visual foraging task, reliably at comparably low spatial and temporal resolution. The TOM-rs, on the other hand, can provide high-resolution oculomotor data at least on a par with an established reference system.
{"title":"Quantitative comparison of a mobile, tablet-based eye-tracker and two stationary, video-based eye-trackers.","authors":"Aylin König, Uwe Thomas, Frank Bremmer, Stefan Dowiasch","doi":"10.3758/s13428-024-02542-w","DOIUrl":"10.3758/s13428-024-02542-w","url":null,"abstract":"<p><p>The analysis of eye movements is a noninvasive, reliable and fast method to detect and quantify brain (dys)function. Here, we investigated the performance of two novel eye-trackers-the Thomas Oculus Motus-research mobile (TOM-rm) and the TOM-research stationary (TOM-rs)-and compared them with the performance of a well-established video-based eye-tracker, i.e., the EyeLink 1000 Plus (EL). The TOM-rm is a fully integrated, tablet-based mobile device that presents visual stimuli and records head-unrestrained eye movements at 30 Hz without additional infrared illumination. The TOM-rs is a stationary, video-based eye-tracker that records eye movements at either high spatial or high temporal resolution. We compared the performance of all three eye-trackers in two different behavioral tasks: pro- and anti-saccade and free viewing. We collected data from 30 human subjects while running all three eye-tracking devices in parallel. Parameters requiring a high spatial or temporal resolution (e.g., saccade latency or gain), as derived from the data, differed significantly between the EL and the TOM-rm in both tasks. Differences between results derived from the TOM-rs and the EL were most likely due to experimental conditions, which could not be optimized for both systems simultaneously. We conclude that the TOM-rm can be used for measuring basic eye-movement parameters, such as the error rate in a typical pro- and anti-saccade task, or the number and position of fixations in a visual foraging task, reliably at comparably low spatial and temporal resolution. The TOM-rs, on the other hand, can provide high-resolution oculomotor data at least on a par with an established reference system.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"45"},"PeriodicalIF":4.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11703885/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The fundamentals of eye tracking part 4: Tools for conducting an eye tracking study.
Pub Date: 2025-01-06 | DOI: 10.3758/s13428-024-02529-7
Diederick C Niehorster, Marcus Nyström, Roy S Hessels, Richard Andersson, Jeroen S Benjamins, Dan Witzner Hansen, Ignace T C Hooge
Researchers using eye tracking are heavily dependent on software and hardware tools to perform their studies, from recording eye tracking data and visualizing it, to processing and analyzing it. This article provides an overview of available tools for research using eye trackers and discusses considerations to make when choosing which tools to adopt for one's study.
{"title":"The fundamentals of eye tracking part 4: Tools for conducting an eye tracking study.","authors":"Diederick C Niehorster, Marcus Nyström, Roy S Hessels, Richard Andersson, Jeroen S Benjamins, Dan Witzner Hansen, Ignace T C Hooge","doi":"10.3758/s13428-024-02529-7","DOIUrl":"10.3758/s13428-024-02529-7","url":null,"abstract":"<p><p>Researchers using eye tracking are heavily dependent on software and hardware tools to perform their studies, from recording eye tracking data and visualizing it, to processing and analyzing it. This article provides an overview of available tools for research using eye trackers and discusses considerations to make when choosing which tools to adopt for one's study.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"46"},"PeriodicalIF":4.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11703944/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validation of the Emotionally Congruent and Incongruent Face-Body Static Set (ECIFBSS).
Pub Date: 2025-01-03 | DOI: 10.3758/s13428-024-02550-w
Anne-Sophie Puffet, Simon Rigoulot
Frequently, we perceive emotional information through multiple channels (e.g., face, voice, posture). These cues interact, facilitating emotional perception when congruent (similar across channels) compared to incongruent (different). Most previous studies on this congruency effect used stimuli from different sets, compromising their quality. In this context, we created and validated a new static stimulus set (ECIFBSS) featuring 1952 facial and body expressions of basic emotions in congruent and incongruent situations. We photographed 40 actors expressing facial emotions and body postures (anger, disgust, happiness, neutral, fear, surprise, and sadness) in both congruent and incongruent situations. The validation was conducted in two parts. In the first part, 76 participants performed a recognition task on facial and bodily expressions separately. In the second part, 40 participants performed the same recognition task, along with an evaluation of four features: intensity, authenticity, arousal, and valence. All emotions (face and body) were well recognized. Consistent with the literature, facial emotions were recognized better than body postures. Happiness was the most recognized facial emotion, while fear was the least. Among body expressions, anger had the highest recognition, while disgust was the least accurately recognized. Finally, facial and bodily expressions were considered moderately authentic, and the evaluation of intensity, valence, and arousal aligned with the dimensional model. The ECIFBSS offers static stimuli for studying facial and body expressions of basic emotions, providing a new tool to explore integrating emotional information from various channels and their reciprocal influence.
{"title":"Validation of the Emotionally Congruent and Incongruent Face-Body Static Set (ECIFBSS).","authors":"Anne-Sophie Puffet, Simon Rigoulot","doi":"10.3758/s13428-024-02550-w","DOIUrl":"10.3758/s13428-024-02550-w","url":null,"abstract":"<p><p>Frequently, we perceive emotional information through multiple channels (e.g., face, voice, posture). These cues interact, facilitating emotional perception when congruent (similar across channels) compared to incongruent (different). Most previous studies on this congruency effect used stimuli from different sets, compromising their quality. In this context, we created and validated a new static stimulus set (ECIFBSS) featuring 1952 facial and body expressions of basic emotions in congruent and incongruent situations. We photographed 40 actors expressing facial emotions and body postures (anger, disgust, happiness, neutral, fear, surprise, and sadness) in both congruent and incongruent situations. The validation was conducted in two parts. In the first part, 76 participants performed a recognition task on facial and bodily expressions separately. In the second part, 40 participants performed the same recognition task, along with an evaluation of four features: intensity, authenticity, arousal, and valence. All emotions (face and body) were well recognized. Consistent with the literature, facial emotions were recognized better than body postures. Happiness was the most recognized facial emotion, while fear was the least. Among body expressions, anger had the highest recognition, while disgust was the least accurately recognized. Finally, facial and bodily expressions were considered moderately authentic, and the evaluation of intensity, valence, and arousal aligned with the dimensional model. The ECIFBSS offers static stimuli for studying facial and body expressions of basic emotions, providing a new tool to explore integrating emotional information from various channels and their reciprocal influence.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"41"},"PeriodicalIF":4.6,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142926346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perception of emotion across cultures: Norms of valence, arousal, and sensory experience for 4923 Chinese words translated from English in Warriner et al. (2013).
Pub Date: 2025-01-03 | DOI: 10.3758/s13428-024-02580-4
Wei Yi, Haitao Xu, Kaiwen Man
Perception of emotion conveyed through language is influenced by embodied experiences obtained from social interactions, which may vary across different cultures. To explore cross-cultural differences in the perception of emotion between Chinese and English speakers, this study collected norms of valence and arousal from 322 native Mandarin speakers for 4923 Chinese words translated from Warriner et al. (Behavior Research Methods, 45, 1191-1207, 2013). Additionally, sensory experience ratings for each word were collected. Analysis demonstrated that the reliability of this dataset is satisfactory, as indicated by comparisons with previous datasets. We examined the distributions of valence and arousal for the entire dataset, as well as for positive and negative emotion categories. Further analysis suggested that valence, arousal, and sensory experience correlated with various psycholinguistic variables, including the number of syllables, number of strokes, imageability, familiarity, concreteness, frequency, and age of acquisition. Cross-language comparison indicated that native speakers of Chinese and English differ in their perception of emotional valence and arousal, largely due to cross-cultural variations associated with ecological, sociopolitical, and religious factors. This dataset will be a valuable resource for research examining the impact of emotional and sensory information on Chinese lexical processing, as well as for bilingual research investigating the interplay between language and emotion across different cultural contexts.
{"title":"Perception of emotion across cultures: Norms of valence, arousal, and sensory experience for 4923 Chinese words translated from English in Warriner et al. (2013).","authors":"Wei Yi, Haitao Xu, Kaiwen Man","doi":"10.3758/s13428-024-02580-4","DOIUrl":"10.3758/s13428-024-02580-4","url":null,"abstract":"<p><p>Perception of emotion conveyed through language is influenced by embodied experiences obtained from social interactions, which may vary across different cultures. To explore cross-cultural differences in the perception of emotion between Chinese and English speakers, this study collected norms of valence and arousal from 322 native Mandarin speakers for 4923 Chinese words translated from Warriner et al., (Behavior Research Methods, 45, 1191-1207, 2013). Additionally, sensory experience ratings for each word were collected. Analysis demonstrated that the reliability of this dataset is satisfactory, as indicated by comparisons with previous datasets. We examined the distributions of valence and arousal for the entire dataset, as well as for positive and negative emotion categories. Further analysis suggested that valence, arousal, and sensory experience correlated with various psycholinguistic variables, including the number of syllables, number of strokes, imageability, familiarity, concreteness, frequency, and age of acquisition. Cross-language comparison indicated that native speakers of Chinese and English differ in their perception of emotional valence and arousal, largely due to cross-cultural variations associated with ecological, sociopolitical, and religious factors. This dataset will be a valuable resource for research examining the impact of emotional and sensory information on Chinese lexical processing, as well as for bilingual research investigating the interplay between language and emotion across different cultural contexts.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"43"},"PeriodicalIF":4.6,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142926336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large language models can segment narrative events similarly to humans.
Pub Date: 2025-01-03 | DOI: 10.3758/s13428-024-02569-z
Sebastian Michelmann, Manoj Kumar, Kenneth A Norman, Mariya Toneva
Humans perceive discrete events such as "restaurant visits" and "train rides" in their continuous experience. One important prerequisite for studying human event perception is the ability of researchers to quantify when one event ends and another begins. Typically, this information is derived by aggregating behavioral annotations from several observers. Here, we present an alternative computational approach where event boundaries are derived using a large language model, GPT-3, instead of using human annotations. We demonstrate that GPT-3 can segment continuous narrative text into events. GPT-3-annotated events are significantly correlated with human event annotations. Furthermore, these GPT-derived annotations achieve a good approximation of the "consensus" solution (obtained by averaging across human annotations); the boundaries identified by GPT-3 are closer to the consensus, on average, than boundaries identified by individual human annotators. This finding suggests that GPT-3 provides a feasible solution for automated event annotations, and it demonstrates a further parallel between human cognition and prediction in large language models. In the future, GPT-3 may thereby help to elucidate the principles underlying human event perception.
{"title":"Large language models can segment narrative events similarly to humans.","authors":"Sebastian Michelmann, Manoj Kumar, Kenneth A Norman, Mariya Toneva","doi":"10.3758/s13428-024-02569-z","DOIUrl":"10.3758/s13428-024-02569-z","url":null,"abstract":"<p><p>Humans perceive discrete events such as \"restaurant visits\" and \"train rides\" in their continuous experience. One important prerequisite for studying human event perception is the ability of researchers to quantify when one event ends and another begins. Typically, this information is derived by aggregating behavioral annotations from several observers. Here, we present an alternative computational approach where event boundaries are derived using a large language model, GPT-3, instead of using human annotations. We demonstrate that GPT-3 can segment continuous narrative text into events. GPT-3-annotated events are significantly correlated with human event annotations. Furthermore, these GPT-derived annotations achieve a good approximation of the \"consensus\" solution (obtained by averaging across human annotations); the boundaries identified by GPT-3 are closer to the consensus, on average, than boundaries identified by individual human annotators. This finding suggests that GPT-3 provides a feasible solution for automated event annotations, and it demonstrates a further parallel between human cognition and prediction in large language models. In the future, GPT-3 may thereby help to elucidate the principles underlying human event perception.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"39"},"PeriodicalIF":4.6,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11810054/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142920531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Visual Integration of Semantic and Spatial Information of Objects in Naturalistic Scenes (VISIONS) database: attentional, conceptual, and perceptual norms.
Pub Date: 2025-01-03 | DOI: 10.3758/s13428-024-02535-9
Elena Allegretti, Giorgia D'Innocenzo, Moreno I Coco
The complex interplay between low- and high-level mechanisms governing our visual system can only be fully understood within ecologically valid naturalistic contexts. For this reason, in recent years, substantial efforts have been devoted to equipping the scientific community with datasets of realistic images normed on semantic or spatial features. Here, we introduce VISIONS, an extensive database of 1136 naturalistic scenes normed by 185 English speakers on a wide range of perceptual and conceptual dimensions across three levels of granularity: isolated object, whole scene, and object-in-scene. Each naturalistic scene contains a critical object systematically manipulated and normed regarding its semantic consistency (e.g., a toothbrush vs. a flashlight in a bathroom) and spatial position (i.e., left, right). Normative data are also available for low- (i.e., clarity, visual complexity) and high-level (i.e., name agreement, confidence, familiarity, prototypicality, manipulability) features of the critical object and its embedding scene context. Eye-tracking data during a free-viewing task further confirm the experimental validity of our manipulations while theoretically demonstrating that object semantics is acquired in extra-foveal vision and used to guide early overt attention. To our knowledge, VISIONS is the first database to exhaustively cover norms for objects integrated in scenes while also providing several perceptual and conceptual norms for objects and scenes taken independently. We expect VISIONS to become an invaluable image dataset to examine and answer timely questions above and beyond vision science, where a diversity of perceptual, attentive, mnemonic, or linguistic processes could be explored as they develop, age, or become neuropathological.
{"title":"The Visual Integration of Semantic and Spatial Information of Objects in Naturalistic Scenes (VISIONS) database: attentional, conceptual, and perceptual norms.","authors":"Elena Allegretti, Giorgia D'Innocenzo, Moreno I Coco","doi":"10.3758/s13428-024-02535-9","DOIUrl":"10.3758/s13428-024-02535-9","url":null,"abstract":"<p><p>The complex interplay between low- and high-level mechanisms governing our visual system can only be fully understood within ecologically valid naturalistic contexts. For this reason, in recent years, substantial efforts have been devoted to equipping the scientific community with datasets of realistic images normed on semantic or spatial features. Here, we introduce VISIONS, an extensive database of 1136 naturalistic scenes normed on a wide range of perceptual and conceptual norms by 185 English speakers across three levels of granularity: isolated object, whole scene, and object-in-scene. Each naturalistic scene contains a critical object systematically manipulated and normed regarding its semantic consistency (e.g., a toothbrush vs. a flashlight in a bathroom) and spatial position (i.e., left, right). Normative data are also available for low- (i.e., clarity, visual complexity) and high-level (i.e., name agreement, confidence, familiarity, prototypicality, manipulability) features of the critical object and its embedding scene context. Eye-tracking data during a free-viewing task further confirms the experimental validity of our manipulations while theoretically demonstrating that object semantics is acquired in extra-foveal vision and used to guide early overt attention. To our knowledge, VISIONS is the first database exhaustively covering norms about integrating objects in scenes and providing several perceptual and conceptual norms of the two as independently taken. We expect VISIONS to become an invaluable image dataset to examine and answer timely questions above and beyond vision science, where a diversity of perceptual, attentive, mnemonic, or linguistic processes could be explored as they develop, age, or become neuropathological.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"42"},"PeriodicalIF":4.6,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142926337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}