
Latest publications in Behavior Research Methods

Expansion of the SyllabO+ corpus and database: Words, lemmas, and morphology.
IF 4.6, CAS Tier 2 (Psychology), Q1 PSYCHOLOGY, EXPERIMENTAL. Pub Date: 2025-01-07. DOI: 10.3758/s13428-024-02582-2
Noémie Auclair-Ouellet, Alexandra Lavoie, Pascale Bédard, Alexandra Barbeau-Morrison, Patrick Drouin, Pascale Tremblay

Having a detailed description of the psycholinguistic properties of a language is essential for conducting well-controlled language experiments. However, there is a paucity of databases for some languages and regional varieties, including Québec French. The SyllabO+ corpus was created to provide a complete phonological and syllabic analysis of a corpus of spoken Québec French. In the present study, the corpus was expanded with 41 additional speakers, bringing the total to 225. The analysis was also expanded to include three new databases: unique words, lemmas, and morphemes (inflectional, derivational, and compounds). Next, the internal structure of unique words was analyzed to identify roots, inflectional markers, and affixes, as well as the components of compounds. Additionally, a group of 441 speakers of Québec French provided semantic transparency ratings for 3764 derived words. Results from the semantic transparency judgment study show broad inter-individual variability for words of medium transparency. No influence of sociodemographic variables was found. Transparency ratings are coherent with studies showing the greater transparency of suffixed words compared to prefixed words. Results for participants who speak French as a second language support the association between second-language proficiency and morphological processing.
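
As an illustration of how such a norming database can be put to use, here is a minimal sketch that aggregates per-word semantic transparency ratings and surfaces the medium-transparency items with the largest inter-rater variability reported in the abstract. The file name, column names, and the 1-7 scale are assumptions, not the published SyllabO+ format.

```python
# Minimal sketch: aggregating semantic transparency ratings per derived word.
# The file and column names (word, rater_id, transparency) are hypothetical;
# adapt them to the actual SyllabO+ export format.
import pandas as pd

ratings = pd.read_csv("syllabo_transparency_ratings.csv")  # hypothetical export

summary = (
    ratings.groupby("word")["transparency"]
    .agg(mean_rating="mean", sd_rating="std", n_raters="count")
    .reset_index()
)

# Words of medium transparency: the middle of the rating scale (assumed 1-7).
medium = summary[summary["mean_rating"].between(3, 5)]

# Inter-individual variability is captured by the per-word SD; sorting finds
# the most contested items, mirroring the variability reported in the abstract.
print(medium.sort_values("sd_rating", ascending=False).head(10))
```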

{"title":"Expansion of the SyllabO+ corpus and database: Words, lemmas, and morphology.","authors":"Noémie Auclair-Ouellet, Alexandra Lavoie, Pascale Bédard, Alexandra Barbeau-Morrison, Patrick Drouin, Pascale Tremblay","doi":"10.3758/s13428-024-02582-2","DOIUrl":"10.3758/s13428-024-02582-2","url":null,"abstract":"<p><p>Having a detailed description of the psycholinguistic properties of a language is essential for conducting well-controlled language experiments. However, there is a paucity of databases for some languages and regional varieties, including Québec French. The SyllabO+ corpus was created to provide a complete phonological and syllabic analysis of a corpus of spoken Québec French. In the present study, the corpus was expanded with 41 additional speakers, bringing the total to 225. The analysis was also expanded to include three new databases: unique words, lemmas, and morphemes (inflectional, derivational, and compounds). Next, the internal structure of unique words was analyzed to identify roots, inflectional markers, and affixes, as well as the components of compounds. Additionally, a group of 441 speakers of Québec French provided semantic transparency ratings for 3764 derived words. Results from the semantic transparency judgment study show broad inter-individual variability for words of medium transparency. No influence of sociodemographic variables was found. Transparency ratings are coherent with studies showing the greater transparency of suffixed words compared to prefixed words. Results for participants who speak French as a second language support the association between second-language proficiency and morphological processing.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"47"},"PeriodicalIF":4.6,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Beyond Reality Image Collection (BRIC).
IF 4.6, CAS Tier 2 (Psychology), Q1 PSYCHOLOGY, EXPERIMENTAL. Pub Date: 2025-01-07. DOI: 10.3758/s13428-024-02586-y
Noga Segal-Gordon, Yoav Bar-Anan

The Beyond Reality Image Collection (BRIC) is a set of 648 photos, some painted by an artist and some generated by artificial intelligence. Unlike previous photosets, the BRIC focuses on nonrealistic visuals. This collection includes abstract and non-abstract paintings and nonrealistic photographs depicting objects, scenes, animals, humans, and fantastical creatures with varying degrees of unreal elements. We collected evaluative ratings of the photos, using a convenience sample of 16,208 participants in a total of 25,321 sessions. We used multiple evaluation measures: binary positive/negative and like/dislike categorization, seven-point ratings on these attributes, both under no time pressure and under time pressure, and evaluative priming scores. The mean evaluations of the photos on the different measures were highly correlated, but some photos consistently elicited a discrepant evaluative reaction between the measures. The BRIC is a valuable resource for eliciting evaluative reactions and can contribute to research on evaluative processes and affective responses.
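
A minimal sketch of the kind of cross-measure comparison described above: correlate mean per-photo evaluations on two of the measures and flag photos whose evaluations diverge. The file and column names (photo_id, seven_point_rating, priming_score) are hypothetical placeholders.

```python
# Minimal sketch: correlating mean photo evaluations across two measures and
# flagging photos with discrepant reactions. File and column names are
# hypothetical, not the published BRIC data format.
import pandas as pd

df = pd.read_csv("bric_ratings.csv")

means = df.groupby("photo_id")[["seven_point_rating", "priming_score"]].mean()

# Overall agreement between the two measures across the 648 photos.
r = means["seven_point_rating"].corr(means["priming_score"])
print(f"Inter-measure correlation: r = {r:.2f}")

# Discrepant photos: large distance between the two standardized means.
z = (means - means.mean()) / means.std()
means["discrepancy"] = (z["seven_point_rating"] - z["priming_score"]).abs()
print(means.sort_values("discrepancy", ascending=False).head(10))
```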

{"title":"The Beyond Reality Image Collection (BRIC).","authors":"Noga Segal-Gordon, Yoav Bar-Anan","doi":"10.3758/s13428-024-02586-y","DOIUrl":"10.3758/s13428-024-02586-y","url":null,"abstract":"<p><p>The Beyond Reality Image Collection (BRIC) is a set of 648 photos, some painted by an artist and some generated by artificial intelligence. Unlike previous photosets, the BRIC focused on nonrealistic visuals. This collection includes abstract and non-abstract paintings and nonrealistic photographs depicting objects, scenes, animals, humans, and fantastical creatures with varying degrees of unreal elements. We collected evaluative ratings of the photos, using a convenience sample of 16,208 participants in a total of 25,321 sessions. We used multiple evaluation measures: binary positive/negative and like/dislike categorization, seven-point ratings on these attributes, both under no time pressure and under time pressure, and evaluative priming scores. The mean evaluation of the photos on the different measures was highly correlated, but some photos consistently elicited a discrepant evaluative reaction between the measures. The BRIC is a valuable resource for eliciting evaluative reactions and can contribute to research on evaluative processes and affective responses.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"49"},"PeriodicalIF":4.6,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11706899/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reliability and validity of four cognitive interpretation bias measures in the context of social anxiety.
IF 4.6, CAS Tier 2 (Psychology), Q1 PSYCHOLOGY, EXPERIMENTAL. Pub Date: 2025-01-07. DOI: 10.3758/s13428-024-02576-0
Sascha B Duken, Jun Moriya, Colette Hirsch, Marcella L Woud, Bram van Bockstaele, Elske Salemink

People with social anxiety disorder tend to interpret ambiguous social information in a negative rather than positive manner. Such interpretation biases may cause and maintain anxiety symptoms. However, there is considerable variability in the observed effects across studies, with some not finding a relationship between interpretation biases and social anxiety. Poor psychometric properties of interpretation bias measures may explain such inconsistent findings. We evaluated the internal consistency, test-retest reliability, convergent validity, and concurrent validity of four interpretation bias measures, ranging from more implicit and automatic to more explicit and reflective: the probe scenario task, the recognition task, the scrambled sentences task, and the interpretation and judgmental bias questionnaire. Young adults (N = 94) completed interpretation bias measures in two sessions separated by one week. Psychometric properties were poor for the probe scenario and not acceptable for the recognition task. The reliability of the scrambled sentences task and the interpretation and judgmental bias questionnaire was good, and they correlated highly with social anxiety and each other, supporting their concurrent and convergent validity. However, there are methodological challenges that should be considered when measuring interpretation biases, even if psychometric indices suggest high measurement validity. We also discuss likely reasons for poor psychometric properties of some tasks and suggest potential solutions to improve the assessment of implicit and automatic biases in social anxiety in future research.
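
The two headline statistics here, internal consistency and one-week test-retest reliability, reduce to a few lines of code. Below is a minimal sketch on simulated placeholder data; the data shapes and scoring are assumptions, not the study's materials.

```python
# Minimal sketch of the two core psychometric checks named in the abstract:
# internal consistency (Cronbach's alpha over a participants-by-items matrix)
# and one-week test-retest reliability (Pearson r between session scores).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = participants, columns = task items."""
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated placeholder data: 94 participants, 20 items sharing a common trait.
rng = np.random.default_rng(0)
trait = rng.normal(size=(94, 1))
session1_items = trait + rng.normal(size=(94, 20))
session2_scores = session1_items.sum(axis=1) + rng.normal(scale=2.0, size=94)

print("alpha =", round(cronbach_alpha(session1_items), 2))
print("test-retest r =",
      round(np.corrcoef(session1_items.sum(axis=1), session2_scores)[0, 1], 2))
```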

{"title":"Reliability and validity of four cognitive interpretation bias measures in the context of social anxiety.","authors":"Sascha B Duken, Jun Moriya, Colette Hirsch, Marcella L Woud, Bram van Bockstaele, Elske Salemink","doi":"10.3758/s13428-024-02576-0","DOIUrl":"10.3758/s13428-024-02576-0","url":null,"abstract":"<p><p>People with social anxiety disorder tend to interpret ambiguous social information in a negative rather than positive manner. Such interpretation biases may cause and maintain anxiety symptoms. However, there is considerable variability in the observed effects across studies, with some not finding a relationship between interpretation biases and social anxiety. Poor psychometric properties of interpretation bias measures may explain such inconsistent findings. We evaluated the internal consistency, test-retest reliability, convergent validity, and concurrent validity of four interpretation bias measures, ranging from more implicit and automatic to more explicit and reflective: the probe scenario task, the recognition task, the scrambled sentences task, and the interpretation and judgmental bias questionnaire. Young adults (N = 94) completed interpretation bias measures in two sessions separated by one week. Psychometric properties were poor for the probe scenario and not acceptable for the recognition task. The reliability of the scrambled sentences task and the interpretation and judgmental bias questionnaire was good, and they correlated highly with social anxiety and each other, supporting their concurrent and convergent validity. However, there are methodological challenges that should be considered when measuring interpretation biases, even if psychometric indices suggest high measurement validity. We also discuss likely reasons for poor psychometric properties of some tasks and suggest potential solutions to improve the assessment of implicit and automatic biases in social anxiety in future research.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"48"},"PeriodicalIF":4.6,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11706852/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Trace: A research media player measuring real-time audience engagement.
IF 4.6, CAS Tier 2 (Psychology), Q1 PSYCHOLOGY, EXPERIMENTAL. Pub Date: 2025-01-06. DOI: 10.3758/s13428-024-02522-0
Ana Levordashka, Mike Richardson, Rebecca J Hirst, Iain D Gilchrist, Danaë Stanton Fraser

Measuring attention and engagement is essential for understanding a wide range of psychological phenomena. Advances in technology have made it possible to measure real-time attention to naturalistic stimuli, providing ecologically valid insight into temporal dynamics. We developed a research protocol called Trace, which records anonymous facial landmarks, expressions, and patterns of movement associated with engagement in screen-based media. Trace runs in a standard internet browser and resembles a contemporary media player. It is embedded in the open-source package PsychoJS (the JavaScript sister library of PsychoPy) hosted via Pavlovia, and can be integrated with a wide range of behavioral research methods. Developed over multiple iterations and tested with over 200 participants in three studies, including the official broadcast of a major theatre production, Trace is a powerful, user-friendly protocol allowing behavioral researchers to capture audience attention and engagement in screen-based media as part of authentic, ecologically valid audience experiences.
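
Trace's export format is not specified in the abstract, so the following is a loose illustration only: a sketch of reducing a hypothetical per-frame landmark export to a simple head-movement index. The column layout and the use of movement as an engagement proxy are assumptions, not the authors' method.

```python
# Illustrative sketch only: reducing a per-frame facial-landmark export to a
# simple movement index. Trace's actual output format is not described in the
# abstract, so the file layout (columns x_0..x_N, y_0..y_N) is hypothetical.
import numpy as np
import pandas as pd

frames = pd.read_csv("trace_landmarks.csv")  # hypothetical export

# Mean landmark position per frame as a coarse head-position proxy.
xs = frames.filter(regex=r"^x_").to_numpy()
ys = frames.filter(regex=r"^y_").to_numpy()
head = np.stack([xs.mean(axis=1), ys.mean(axis=1)], axis=1)

# Frame-to-frame displacement; relative stillness is one possible (assumed)
# proxy for sustained attention to the screen.
movement = np.linalg.norm(np.diff(head, axis=0), axis=1)
print("median frame-to-frame movement:", np.median(movement))
```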

{"title":"Trace: A research media player measuring real-time audience engagement.","authors":"Ana Levordashka, Mike Richardson, Rebecca J Hirst, Iain D Gilchrist, Danaë Stanton Fraser","doi":"10.3758/s13428-024-02522-0","DOIUrl":"10.3758/s13428-024-02522-0","url":null,"abstract":"<p><p>Measuring attention and engagement is essential for understanding a wide range of psychological phenomena. Advances in technology have made it possible to measure real-time attention to naturalistic stimuli, providing ecologically valid insight into temporal dynamics. We developed a research protocol called Trace, which records anonymous facial landmarks, expressions, and patterns of movement associated with engagement in screen-based media. Trace runs in a standard internet browser and resembles a contemporary media player. It is embedded in the open-source package PsychoJS (the JavaScript sister library of PsychoPy) hosted via Pavlovia, and can be integrated with a wide range of behavioral research methods. Developed over multiple iterations and tested with over 200 participants in three studies, including the official broadcast of a major theatre production, Trace is a powerful, user-friendly protocol allowing behavioral researchers to capture audience attention and engagement in screen-based media as part of authentic, ecologically valid audience experiences.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"44"},"PeriodicalIF":4.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11703984/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Quantitative comparison of a mobile, tablet-based eye-tracker and two stationary, video-based eye-trackers.
IF 4.6, CAS Tier 2 (Psychology), Q1 PSYCHOLOGY, EXPERIMENTAL. Pub Date: 2025-01-06. DOI: 10.3758/s13428-024-02542-w
Aylin König, Uwe Thomas, Frank Bremmer, Stefan Dowiasch

The analysis of eye movements is a noninvasive, reliable, and fast method to detect and quantify brain (dys)function. Here, we investigated the performance of two novel eye-trackers, the Thomas Oculus Motus-research mobile (TOM-rm) and the TOM-research stationary (TOM-rs), and compared them with the performance of a well-established video-based eye-tracker, i.e., the EyeLink 1000 Plus (EL). The TOM-rm is a fully integrated, tablet-based mobile device that presents visual stimuli and records head-unrestrained eye movements at 30 Hz without additional infrared illumination. The TOM-rs is a stationary, video-based eye-tracker that records eye movements at either high spatial or high temporal resolution. We compared the performance of all three eye-trackers in two different behavioral tasks: pro- and anti-saccade and free viewing. We collected data from 30 human subjects while running all three eye-tracking devices in parallel. Parameters requiring a high spatial or temporal resolution (e.g., saccade latency or gain), as derived from the data, differed significantly between the EL and the TOM-rm in both tasks. Differences between results derived from the TOM-rs and the EL were most likely due to experimental conditions, which could not be optimized for both systems simultaneously. We conclude that the TOM-rm can be used for measuring basic eye-movement parameters, such as the error rate in a typical pro- and anti-saccade task, or the number and position of fixations in a visual foraging task, reliably at comparably low spatial and temporal resolution. The TOM-rs, on the other hand, can provide high-resolution oculomotor data at least on a par with an established reference system.
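
Saccade latency, one of the resolution-sensitive parameters compared here, is commonly estimated with a velocity threshold. A minimal sketch on synthetic gaze data follows; the 30 deg/s threshold is an illustrative convention, and at the TOM-rm's 30 Hz sampling rate such estimates are necessarily coarse.

```python
# Minimal sketch: estimating saccade latency from gaze samples with a simple
# velocity threshold, the kind of parameter compared across the three trackers.
import numpy as np

def saccade_latency(t, x, y, target_onset, vel_thresh=30.0):
    """t in seconds, x/y in degrees; returns latency in ms, or None."""
    vel = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)   # deg/s per sample
    candidates = (t[1:] >= target_onset) & (vel > vel_thresh)
    if not candidates.any():
        return None
    onset = t[1:][np.argmax(candidates)]                  # first suprathreshold sample
    return (onset - target_onset) * 1000.0

# Synthetic 1000-Hz example: fixation, then a 300 deg/s saccade 180 ms after onset.
t = np.arange(0, 0.5, 0.001)
x = np.where(t < 0.18, 0.0, np.minimum((t - 0.18) * 300.0, 10.0))
y = np.zeros_like(t)
print(saccade_latency(t, x, y, target_onset=0.0))  # ~180 ms
```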

{"title":"Quantitative comparison of a mobile, tablet-based eye-tracker and two stationary, video-based eye-trackers.","authors":"Aylin König, Uwe Thomas, Frank Bremmer, Stefan Dowiasch","doi":"10.3758/s13428-024-02542-w","DOIUrl":"10.3758/s13428-024-02542-w","url":null,"abstract":"<p><p>The analysis of eye movements is a noninvasive, reliable and fast method to detect and quantify brain (dys)function. Here, we investigated the performance of two novel eye-trackers-the Thomas Oculus Motus-research mobile (TOM-rm) and the TOM-research stationary (TOM-rs)-and compared them with the performance of a well-established video-based eye-tracker, i.e., the EyeLink 1000 Plus (EL). The TOM-rm is a fully integrated, tablet-based mobile device that presents visual stimuli and records head-unrestrained eye movements at 30 Hz without additional infrared illumination. The TOM-rs is a stationary, video-based eye-tracker that records eye movements at either high spatial or high temporal resolution. We compared the performance of all three eye-trackers in two different behavioral tasks: pro- and anti-saccade and free viewing. We collected data from 30 human subjects while running all three eye-tracking devices in parallel. Parameters requiring a high spatial or temporal resolution (e.g., saccade latency or gain), as derived from the data, differed significantly between the EL and the TOM-rm in both tasks. Differences between results derived from the TOM-rs and the EL were most likely due to experimental conditions, which could not be optimized for both systems simultaneously. We conclude that the TOM-rm can be used for measuring basic eye-movement parameters, such as the error rate in a typical pro- and anti-saccade task, or the number and position of fixations in a visual foraging task, reliably at comparably low spatial and temporal resolution. The TOM-rs, on the other hand, can provide high-resolution oculomotor data at least on a par with an established reference system.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"45"},"PeriodicalIF":4.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11703885/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The fundamentals of eye tracking part 4: Tools for conducting an eye tracking study.
IF 4.6, CAS Tier 2 (Psychology), Q1 PSYCHOLOGY, EXPERIMENTAL. Pub Date: 2025-01-06. DOI: 10.3758/s13428-024-02529-7
Diederick C Niehorster, Marcus Nyström, Roy S Hessels, Richard Andersson, Jeroen S Benjamins, Dan Witzner Hansen, Ignace T C Hooge

Researchers using eye tracking are heavily dependent on software and hardware tools to perform their studies, from recording eye tracking data and visualizing it, to processing and analyzing it. This article provides an overview of available tools for research using eye trackers and discusses considerations to make when choosing which tools to adopt for one's study.

{"title":"The fundamentals of eye tracking part 4: Tools for conducting an eye tracking study.","authors":"Diederick C Niehorster, Marcus Nyström, Roy S Hessels, Richard Andersson, Jeroen S Benjamins, Dan Witzner Hansen, Ignace T C Hooge","doi":"10.3758/s13428-024-02529-7","DOIUrl":"10.3758/s13428-024-02529-7","url":null,"abstract":"<p><p>Researchers using eye tracking are heavily dependent on software and hardware tools to perform their studies, from recording eye tracking data and visualizing it, to processing and analyzing it. This article provides an overview of available tools for research using eye trackers and discusses considerations to make when choosing which tools to adopt for one's study.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"46"},"PeriodicalIF":4.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11703944/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142943481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Validation of the Emotionally Congruent and Incongruent Face-Body Static Set (ECIFBSS).
IF 4.6, CAS Tier 2 (Psychology), Q1 PSYCHOLOGY, EXPERIMENTAL. Pub Date: 2025-01-03. DOI: 10.3758/s13428-024-02550-w
Anne-Sophie Puffet, Simon Rigoulot

Frequently, we perceive emotional information through multiple channels (e.g., face, voice, posture). These cues interact, facilitating emotional perception when congruent (similar across channels) compared to incongruent (different). Most previous studies on this congruency effect used stimuli from different sets, compromising their quality. In this context, we created and validated a new static stimulus set (ECIFBSS) featuring 1952 facial and body expressions of basic emotions in congruent and incongruent situations. We photographed 40 actors expressing facial emotions and body postures (anger, disgust, happiness, neutral, fear, surprise, and sadness) in both congruent and incongruent situations. The validation was conducted in two parts. In the first part, 76 participants performed a recognition task on facial and bodily expressions separately. In the second part, 40 participants performed the same recognition task, along with an evaluation of four features: intensity, authenticity, arousal, and valence. All emotions (face and body) were well recognized. Consistent with the literature, facial emotions were recognized better than body postures. Happiness was the most recognized facial emotion, while fear was the least. Among body expressions, anger had the highest recognition, while disgust was the least accurately recognized. Finally, facial and bodily expressions were considered moderately authentic, and the evaluation of intensity, valence, and arousal aligned with the dimensional model. The ECIFBSS offers static stimuli for studying facial and body expressions of basic emotions, providing a new tool to explore integrating emotional information from various channels and their reciprocal influence.
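
A minimal sketch of the core validation statistic, per-emotion recognition accuracy split by channel (face vs. body), computed from hypothetical trial-level data; the file and column names are placeholders, not the published ECIFBSS format.

```python
# Minimal sketch: per-emotion recognition accuracy for face vs. body stimuli.
# Column names (channel, intended_emotion, response) are hypothetical.
import pandas as pd

trials = pd.read_csv("ecifbss_recognition.csv")  # hypothetical trial-level data

trials["correct"] = trials["response"] == trials["intended_emotion"]
accuracy = (
    trials.groupby(["channel", "intended_emotion"])["correct"]  # channel: face/body
    .mean()
    .unstack("channel")
)
# The abstract reports, e.g., happiness highest for faces, anger for bodies.
print(accuracy)
```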

{"title":"Validation of the Emotionally Congruent and Incongruent Face-Body Static Set (ECIFBSS).","authors":"Anne-Sophie Puffet, Simon Rigoulot","doi":"10.3758/s13428-024-02550-w","DOIUrl":"10.3758/s13428-024-02550-w","url":null,"abstract":"<p><p>Frequently, we perceive emotional information through multiple channels (e.g., face, voice, posture). These cues interact, facilitating emotional perception when congruent (similar across channels) compared to incongruent (different). Most previous studies on this congruency effect used stimuli from different sets, compromising their quality. In this context, we created and validated a new static stimulus set (ECIFBSS) featuring 1952 facial and body expressions of basic emotions in congruent and incongruent situations. We photographed 40 actors expressing facial emotions and body postures (anger, disgust, happiness, neutral, fear, surprise, and sadness) in both congruent and incongruent situations. The validation was conducted in two parts. In the first part, 76 participants performed a recognition task on facial and bodily expressions separately. In the second part, 40 participants performed the same recognition task, along with an evaluation of four features: intensity, authenticity, arousal, and valence. All emotions (face and body) were well recognized. Consistent with the literature, facial emotions were recognized better than body postures. Happiness was the most recognized facial emotion, while fear was the least. Among body expressions, anger had the highest recognition, while disgust was the least accurately recognized. Finally, facial and bodily expressions were considered moderately authentic, and the evaluation of intensity, valence, and arousal aligned with the dimensional model. The ECIFBSS offers static stimuli for studying facial and body expressions of basic emotions, providing a new tool to explore integrating emotional information from various channels and their reciprocal influence.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"41"},"PeriodicalIF":4.6,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142926346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Perception of emotion across cultures: Norms of valence, arousal, and sensory experience for 4923 Chinese words translated from English in Warriner et al. (2013).
IF 4.6, CAS Tier 2 (Psychology), Q1 PSYCHOLOGY, EXPERIMENTAL. Pub Date: 2025-01-03. DOI: 10.3758/s13428-024-02580-4
Wei Yi, Haitao Xu, Kaiwen Man

Perception of emotion conveyed through language is influenced by embodied experiences obtained from social interactions, which may vary across different cultures. To explore cross-cultural differences in the perception of emotion between Chinese and English speakers, this study collected norms of valence and arousal from 322 native Mandarin speakers for 4923 Chinese words translated from Warriner et al. (Behavior Research Methods, 45, 1191-1207, 2013). Additionally, sensory experience ratings for each word were collected. Analysis demonstrated that the reliability of this dataset is satisfactory, as indicated by comparisons with previous datasets. We examined the distributions of valence and arousal for the entire dataset, as well as for positive and negative emotion categories. Further analysis suggested that valence, arousal, and sensory experience correlated with various psycholinguistic variables, including the number of syllables, number of strokes, imageability, familiarity, concreteness, frequency, and age of acquisition. Cross-language comparison indicated that native speakers of Chinese and English differ in their perception of emotional valence and arousal, largely due to cross-cultural variations associated with ecological, sociopolitical, and religious factors. This dataset will be a valuable resource for research examining the impact of emotional and sensory information on Chinese lexical processing, as well as for bilingual research investigating the interplay between language and emotion across different cultural contexts.
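
Two of the analyses described, reliability against earlier norms and correlations with psycholinguistic variables, reduce to straightforward correlations. A minimal sketch follows; the file and column names are hypothetical placeholders for the published norm files.

```python
# Minimal sketch: checking norm reliability against an earlier dataset and
# correlating valence/arousal with psycholinguistic variables. File and
# column names are hypothetical.
import pandas as pd

new = pd.read_csv("chinese_norms_2025.csv")   # word, valence, arousal, ...
old = pd.read_csv("previous_norms.csv")       # word, valence, arousal

# Shared words give a reliability estimate across datasets.
merged = new.merge(old, on="word", suffixes=("_new", "_old"))
print("cross-dataset reliability (valence):",
      round(merged["valence_new"].corr(merged["valence_old"]), 2))

cols = ["valence", "arousal", "sensory_experience",
        "n_strokes", "frequency", "age_of_acquisition"]
print(new[cols].corr().round(2))  # correlation matrix across variables
```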

{"title":"Perception of emotion across cultures: Norms of valence, arousal, and sensory experience for 4923 Chinese words translated from English in Warriner et al. (2013).","authors":"Wei Yi, Haitao Xu, Kaiwen Man","doi":"10.3758/s13428-024-02580-4","DOIUrl":"10.3758/s13428-024-02580-4","url":null,"abstract":"<p><p>Perception of emotion conveyed through language is influenced by embodied experiences obtained from social interactions, which may vary across different cultures. To explore cross-cultural differences in the perception of emotion between Chinese and English speakers, this study collected norms of valence and arousal from 322 native Mandarin speakers for 4923 Chinese words translated from Warriner et al., (Behavior Research Methods, 45, 1191-1207, 2013). Additionally, sensory experience ratings for each word were collected. Analysis demonstrated that the reliability of this dataset is satisfactory, as indicated by comparisons with previous datasets. We examined the distributions of valence and arousal for the entire dataset, as well as for positive and negative emotion categories. Further analysis suggested that valence, arousal, and sensory experience correlated with various psycholinguistic variables, including the number of syllables, number of strokes, imageability, familiarity, concreteness, frequency, and age of acquisition. Cross-language comparison indicated that native speakers of Chinese and English differ in their perception of emotional valence and arousal, largely due to cross-cultural variations associated with ecological, sociopolitical, and religious factors. This dataset will be a valuable resource for research examining the impact of emotional and sensory information on Chinese lexical processing, as well as for bilingual research investigating the interplay between language and emotion across different cultural contexts.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"43"},"PeriodicalIF":4.6,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142926336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Large language models can segment narrative events similarly to humans.
IF 4.6, CAS Tier 2 (Psychology), Q1 PSYCHOLOGY, EXPERIMENTAL. Pub Date: 2025-01-03. DOI: 10.3758/s13428-024-02569-z
Sebastian Michelmann, Manoj Kumar, Kenneth A Norman, Mariya Toneva

Humans perceive discrete events such as "restaurant visits" and "train rides" in their continuous experience. One important prerequisite for studying human event perception is the ability of researchers to quantify when one event ends and another begins. Typically, this information is derived by aggregating behavioral annotations from several observers. Here, we present an alternative computational approach where event boundaries are derived using a large language model, GPT-3, instead of using human annotations. We demonstrate that GPT-3 can segment continuous narrative text into events. GPT-3-annotated events are significantly correlated with human event annotations. Furthermore, these GPT-derived annotations achieve a good approximation of the "consensus" solution (obtained by averaging across human annotations); the boundaries identified by GPT-3 are closer to the consensus, on average, than boundaries identified by individual human annotators. This finding suggests that GPT-3 provides a feasible solution for automated event annotations, and it demonstrates a further parallel between human cognition and prediction in large language models. In the future, GPT-3 may thereby help to elucidate the principles underlying human event perception.
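
In the same spirit as the paper's approach (which used GPT-3), here is a minimal sketch of LLM-based segmentation: ask a model to copy a narrative and insert boundary markers, then recover the boundary offsets. The client call uses the current OpenAI Python SDK; the model name and prompt wording are assumptions, not the authors' materials.

```python
# Minimal sketch of LLM-based event segmentation: the model copies the text
# and inserts <BOUNDARY> markers; we then recover character offsets.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def segment_events(narrative: str, model: str = "gpt-4o-mini") -> list[int]:
    prompt = (
        "Copy the following story exactly, but insert the marker <BOUNDARY> "
        "at every point where one natural event ends and a new one begins.\n\n"
        + narrative
    )
    reply = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    ).choices[0].message.content

    # Character offsets of each boundary, assuming the copy is faithful.
    positions, offset = [], 0
    for chunk in reply.split("<BOUNDARY>")[:-1]:
        offset += len(chunk)
        positions.append(offset)
    return positions

print(segment_events("We entered the restaurant. We ordered. Later we caught a train."))
```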

{"title":"Large language models can segment narrative events similarly to humans.","authors":"Sebastian Michelmann, Manoj Kumar, Kenneth A Norman, Mariya Toneva","doi":"10.3758/s13428-024-02569-z","DOIUrl":"10.3758/s13428-024-02569-z","url":null,"abstract":"<p><p>Humans perceive discrete events such as \"restaurant visits\" and \"train rides\" in their continuous experience. One important prerequisite for studying human event perception is the ability of researchers to quantify when one event ends and another begins. Typically, this information is derived by aggregating behavioral annotations from several observers. Here, we present an alternative computational approach where event boundaries are derived using a large language model, GPT-3, instead of using human annotations. We demonstrate that GPT-3 can segment continuous narrative text into events. GPT-3-annotated events are significantly correlated with human event annotations. Furthermore, these GPT-derived annotations achieve a good approximation of the \"consensus\" solution (obtained by averaging across human annotations); the boundaries identified by GPT-3 are closer to the consensus, on average, than boundaries identified by individual human annotators. This finding suggests that GPT-3 provides a feasible solution for automated event annotations, and it demonstrates a further parallel between human cognition and prediction in large language models. In the future, GPT-3 may thereby help to elucidate the principles underlying human event perception.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"39"},"PeriodicalIF":4.6,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11810054/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142920531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Visual Integration of Semantic and Spatial Information of Objects in Naturalistic Scenes (VISIONS) database: attentional, conceptual, and perceptual norms.
IF 4.6, CAS Tier 2 (Psychology), Q1 PSYCHOLOGY, EXPERIMENTAL. Pub Date: 2025-01-03. DOI: 10.3758/s13428-024-02535-9
Elena Allegretti, Giorgia D'Innocenzo, Moreno I Coco

The complex interplay between low- and high-level mechanisms governing our visual system can only be fully understood within ecologically valid naturalistic contexts. For this reason, in recent years, substantial efforts have been devoted to equipping the scientific community with datasets of realistic images normed on semantic or spatial features. Here, we introduce VISIONS, an extensive database of 1136 naturalistic scenes normed on a wide range of perceptual and conceptual norms by 185 English speakers across three levels of granularity: isolated object, whole scene, and object-in-scene. Each naturalistic scene contains a critical object systematically manipulated and normed regarding its semantic consistency (e.g., a toothbrush vs. a flashlight in a bathroom) and spatial position (i.e., left, right). Normative data are also available for low- (i.e., clarity, visual complexity) and high-level (i.e., name agreement, confidence, familiarity, prototypicality, manipulability) features of the critical object and its embedding scene context. Eye-tracking data during a free-viewing task further confirms the experimental validity of our manipulations while theoretically demonstrating that object semantics is acquired in extra-foveal vision and used to guide early overt attention. To our knowledge, VISIONS is the first database to exhaustively cover norms on objects integrated in scenes while also providing perceptual and conceptual norms for objects and scenes taken independently. We expect VISIONS to become an invaluable image dataset to examine and answer timely questions above and beyond vision science, where a diversity of perceptual, attentive, mnemonic, or linguistic processes could be explored as they develop, age, or become neuropathological.
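
Among the high-level norms listed is name agreement. As a sketch, the two indices conventionally reported in norming studies, modal-name percentage and the H statistic (Snodgrass & Vanderwart, 1980), can be computed from raw naming responses; the responses below are invented for illustration.

```python
# Minimal sketch: two standard name-agreement indices for one normed object,
# modal-name percentage and the H statistic, H = sum_i p_i * log2(1 / p_i).
from collections import Counter
from math import log2

def name_agreement(responses: list[str]) -> tuple[float, float]:
    counts = Counter(r.lower().strip() for r in responses)
    n = sum(counts.values())
    modal_pct = 100.0 * counts.most_common(1)[0][1] / n
    h = sum((c / n) * log2(n / c) for c in counts.values())
    return modal_pct, h

# Invented responses for a single object; high agreement gives high modal %
# and low H (H = 0 when all raters produce the same name).
names = ["toothbrush"] * 160 + ["brush"] * 20 + ["electric toothbrush"] * 5
print(name_agreement(names))
```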

{"title":"The Visual Integration of Semantic and Spatial Information of Objects in Naturalistic Scenes (VISIONS) database: attentional, conceptual, and perceptual norms.","authors":"Elena Allegretti, Giorgia D'Innocenzo, Moreno I Coco","doi":"10.3758/s13428-024-02535-9","DOIUrl":"10.3758/s13428-024-02535-9","url":null,"abstract":"<p><p>The complex interplay between low- and high-level mechanisms governing our visual system can only be fully understood within ecologically valid naturalistic contexts. For this reason, in recent years, substantial efforts have been devoted to equipping the scientific community with datasets of realistic images normed on semantic or spatial features. Here, we introduce VISIONS, an extensive database of 1136 naturalistic scenes normed on a wide range of perceptual and conceptual norms by 185 English speakers across three levels of granularity: isolated object, whole scene, and object-in-scene. Each naturalistic scene contains a critical object systematically manipulated and normed regarding its semantic consistency (e.g., a toothbrush vs. a flashlight in a bathroom) and spatial position (i.e., left, right). Normative data are also available for low- (i.e., clarity, visual complexity) and high-level (i.e., name agreement, confidence, familiarity, prototypicality, manipulability) features of the critical object and its embedding scene context. Eye-tracking data during a free-viewing task further confirms the experimental validity of our manipulations while theoretically demonstrating that object semantics is acquired in extra-foveal vision and used to guide early overt attention. To our knowledge, VISIONS is the first database exhaustively covering norms about integrating objects in scenes and providing several perceptual and conceptual norms of the two as independently taken. We expect VISIONS to become an invaluable image dataset to examine and answer timely questions above and beyond vision science, where a diversity of perceptual, attentive, mnemonic, or linguistic processes could be explored as they develop, age, or become neuropathological.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"42"},"PeriodicalIF":4.6,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142926337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0