
Latest articles in Behavior Research Methods

A large-scale, gamified online assessment of first impressions: The Who Knows project.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2025-02-03 DOI: 10.3758/s13428-025-02601-w
Richard Rau, Michael P Grosz, Mitja D Back

Interpersonal judgments play a central role in human social interactions, influencing decisions ranging from friendships to presidential elections. Despite extensive research on the accuracy of these judgments, an overreliance on broad personality traits and subjective judgments as criteria for accuracy has hindered progress in this area. Further, most individuals involved in past studies (either as judges or targets) came from ad hoc student samples, which hampers generalizability. This paper introduces Who Knows ( https://whoknows.uni-muenster.de ), an innovative smartphone application designed to address these limitations. Who Knows was developed with the aim of creating a comprehensive and reliable database for examining first impressions. It uses a gamified approach in which users judge personality-related characteristics of strangers based on short video introductions. The project incorporates multifaceted criteria to evaluate judgments, going beyond traditional self-other agreement. Additionally, the app draws on a large pool of highly specific and heterogeneous items and allows users to judge a diverse array of targets on their smartphones. The app's design prioritizes user engagement through a responsive interface, feedback mechanisms, and gamification elements, enhancing users' motivation to provide judgments. The Who Knows project is ongoing and promises to shed new light on interpersonal perception by offering a vast dataset with diverse items and a large number of participants (as of fall 2024, N = 9,671 users). Researchers are encouraged to access this resource for a wide range of empirical inquiries and to contribute to the project by submitting items or software features to be included in future versions of the app.

Citations: 0
Viewing mock crimes in virtual reality increases presence without impacting memory.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2025-02-03 DOI: 10.3758/s13428-024-02575-1
Andrew D Green, Andrew Clark, Melanie Pitchford, Andy Guppy

Traditional methods of displaying stimuli in eyewitness memory research, such as mock crime videos, are often criticised for lacking ecological validity. To overcome this, researchers have suggested using virtual reality (VR) technology to display the stimuli as it can promote a sense of presence, leading to real-world responses. However, little research has compared VR with traditional methods to demonstrate this enhanced validity. In Study 1, 54 participants viewed a mock crime video on screen or in VR while their heart rate was recorded, then completed measures of presence and emotion, and had their recall tested after 10 min. In Study 2, 74 participants' recall was tested after a 7-day delay and included a more in-depth exploration of emotional experience. In both studies, participants in the VR group reported a statistically significant increase in their sense of general presence, spatial presence, and involvement in the scene; however, there was no statistically significant difference in recall between the groups. Participants in the VR group had a statistically significant increase in heart rate in Study 1 only, and emotional experience in Study 2 only. The findings of this research suggest that VR may provide a more ecologically valid eyewitness experience than videos, without impacting participant memory or wellbeing. The findings of the current research are discussed in relation to previous literature and implications for experimental eyewitness memory research.

Citations: 0
Introducing the Sisu Voice Matching Test (SVMT): A novel tool for assessing voice discrimination in Chinese.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2025-02-03 DOI: 10.3758/s13428-025-02608-3
Tianze Xu, Xiaoming Jiang, Peng Zhang, Anni Wang

Existing standardized tests for voice discrimination are based mainly on Indo-European languages, particularly English. However, voice identity perception is influenced by language familiarity, with listeners generally performing better in their native language than in a foreign one. To provide a more accurate and comprehensive assessment of voice discrimination, it is crucial to develop tests tailored to the native language of the test takers. In response, we developed the Sisu Voice Matching Test (SVMT), a pioneering tool designed specifically for Mandarin Chinese speakers. The SVMT was designed to model real-world communication since it includes both pseudo-word and pseudo-sentence stimuli and covers both the ability to categorize identical voices as the same and the ability to categorize distinct voices as different. Built on a neurally validated voice-space model and item response theory, the SVMT ensures high reliability, validity, appropriate difficulty, and strong discriminative power, while maintaining a concise test duration of approximately 10 min. Therefore, by taking into account the effects of language nativeness, the SVMT complements existing voice tests based on other languages' phonologies to provide a more accurate assessment of voice discrimination ability for Mandarin Chinese speakers. Future research can use the SVMT to deepen our understanding of the mechanisms underlying human voice identity perception, especially in special populations, and to examine the relationship between voice identity recognition and other cognitive processes.
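The abstract notes that the SVMT was built on item response theory. As an illustrative sketch only (not the authors' code; the function name and parameter values are assumptions), the standard two-parameter logistic (2PL) item response function at the heart of such calibration looks like this:

```python
import math

def p_correct_2pl(theta: float, a: float, b: float) -> float:
    """2PL item response function: probability that a test taker with
    ability `theta` answers correctly an item with discrimination `a`
    and difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the predicted success probability is exactly 0.5;
# higher ability always yields a higher success probability.
p_at_difficulty = p_correct_2pl(0.0, 1.5, 0.0)
```

Under this model, item difficulty (b) places each voice-matching item on the same latent scale as listener ability, which is what allows the test to be both short and discriminating.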

Citations: 0
Computerized continuous scoring of the cognitive style figure test: Embedded figure test as an example.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2025-02-03 DOI: 10.3758/s13428-024-02559-1
Meng Ye, Jingyi Li

Extensive research has shown that cognitive style is a non-negligible potential influence on domains of human functioning, such as learning, creativity, and cooperation among individuals. However, the dichotomous treatment of cognitive style contradicts the fact that cognitive style is a continuous variable, and dichotomization loses information about the strength of people's performance between the poles of cognitive style. To solve this problem, this study developed a computerized continuous scoring system (CCS) based on Python's OpenCV library and achieved continuous scoring of a cognitive style test, with the Embedded Figure Test as an example. An empirical study was conducted to compare the performance of dichotomous scoring and the CCS. The results show that the CCS can accurately extract the traces of participants' responses and achieve continuous scoring, supplementing the information on the strength of people's cognitive styles between the two poles, and that psychometric properties of CCS-based tests, such as discrimination, reliability, and validity, are significantly improved compared with dichotomous scoring. Given the high reproducibility of the CCS, it is expected to be applied to scoring other continuous characteristics in the future.
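As a rough illustration of how continuous (rather than right/wrong) scoring of a drawn trace can work — this is not the authors' implementation; the IoU-based score and the toy arrays below are assumptions, with plain NumPy standing in for OpenCV image I/O — a binarized participant trace can be compared against the target figure's pixel mask to yield a graded score:

```python
import numpy as np

def continuous_score(trace: np.ndarray, target: np.ndarray) -> float:
    """Grade a response continuously: intersection-over-union between the
    binarized participant trace and the target figure's pixel mask."""
    trace_bin = trace > 0
    target_bin = target > 0
    union = np.logical_or(trace_bin, target_bin).sum()
    if union == 0:
        return 0.0
    inter = np.logical_and(trace_bin, target_bin).sum()
    return float(inter) / float(union)

# Toy 8x8 "images": a square target region and a trace missing one column.
target = np.zeros((8, 8)); target[2:6, 2:6] = 1
trace = np.zeros((8, 8)); trace[2:6, 2:5] = 1
score = continuous_score(trace, target)  # 12 shared pixels / 16 total = 0.75
```

In a real pipeline the masks would come from image loading and thresholding (e.g., OpenCV's threshold functions), but the scoring arithmetic that turns a trace into a point on a continuum is the same.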

Citations: 0
Cross-validation and predictive metrics in psychological research: Do not leave out the leave-one-out.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2025-02-03 DOI: 10.3758/s13428-024-02588-w
Diego Iglesias, Miguel A Sorrel, Ricardo Olmos

There is growing interest in integrating explanatory and predictive research practices in psychological research. For this integration to be successful, the psychologist's toolkit must incorporate standard procedures that enable a direct estimation of the prediction error, such as cross-validation (CV). Despite their apparent simplicity, CV methods are intricate, and thus it is crucial to adapt them to specific contexts and predictive metrics. This study delves into the performance of different CV methods in estimating the prediction error in the R² and MSE metrics in regression analysis, ubiquitous in psychological research. Current approaches, which rely on the 5- or 10-fold rule of thumb or on the squared correlation between predicted and observed values, present limitations when computing the prediction error in the R² metric, a widely used statistic in the behavioral sciences. We propose the use of an alternative method that overcomes these limitations and enables the computation of the leave-one-out (LOO) in the R² metric. Through two Monte Carlo simulation studies and the application of CV to the data from the Many Labs Replication Project, we show that the LOO consistently has the best performance. The CV methods discussed in the present study have been implemented in the R package OutR2.
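To make the distinction concrete, here is a small, hedged sketch (synthetic data and hand-rolled OLS; not the OutR2 implementation) contrasting the prediction-error-based LOO R² with the squared predicted-observed correlation and the in-sample fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 3
X = rng.normal(size=(n, p))
y = X @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=n)

def ols_fit(X, y):
    # Ordinary least squares with an intercept column.
    Xd = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return coef

def ols_predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

# Leave-one-out: predict each observation from a model fit on the other n-1.
preds = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    preds[i] = ols_predict(ols_fit(X[mask], y[mask]), X[i:i + 1])[0]

sst = np.sum((y - y.mean()) ** 2)
mse_loo = np.mean((y - preds) ** 2)
r2_loo = 1.0 - np.sum((y - preds) ** 2) / sst      # prediction-error-based R^2
r2_corr = np.corrcoef(y, preds)[0, 1] ** 2         # squared correlation
r2_in = 1.0 - np.sum((y - ols_predict(ols_fit(X, y), X)) ** 2) / sst
```

Note that r2_loo can even turn negative for badly overfit models, a pathology the squared correlation (bounded in [0, 1]) can never reveal, while the in-sample r2_in is always optimistically biased relative to r2_loo.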

Citations: 0
Examination of nonlinear longitudinal processes with latent variables, latent processes, latent changes, and latent classes in the structural equation modeling framework: The R package nlpsem.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2025-02-03 DOI: 10.3758/s13428-025-02596-4
Jin Liu

We introduce the R package nlpsem (Liu, 2023), a comprehensive toolkit for analyzing longitudinal processes within the structural equation modeling (SEM) framework, incorporating individual measurement occasions. This package emphasizes nonlinear longitudinal models, especially intrinsic ones, across four key scenarios: (1) univariate longitudinal processes with latent variables, optionally including covariates such as time-invariant covariates (TICs) and time-varying covariates (TVCs); (2) multivariate longitudinal analyses to explore correlations or unidirectional relationships between longitudinal variables; (3) multiple-group frameworks for comparing manifest classes in scenarios (1) and (2); and (4) mixture models for scenarios (1) and (2), accommodating latent class heterogeneity. Built on the OpenMx R package, nlpsem supports flexible model designs and uses the full information maximum likelihood method for parameter estimation. A notable feature is its algorithm for determining initial values directly from raw data, improving computational efficiency and convergence. Furthermore, nlpsem provides tools for goodness-of-fit tests, cluster analyses, visualization, derivation of p values and three types of confidence intervals, as well as model selection for nested models using likelihood-ratio tests and for non-nested models based on criteria such as Akaike information criterion and Bayesian information criterion. This article serves as a companion document to the nlpsem R package, providing a comprehensive guide to its modeling capabilities, estimation methods, implementation features, and application examples using synthetic intelligence growth data.
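For readers unfamiliar with the non-nested model-selection criteria mentioned above, both are simple functions of the maximized log-likelihood. A minimal sketch (the nlpsem package computes these internally; the function names below are ours):

```python
import math

def aic(loglik: float, n_params: int) -> float:
    """Akaike information criterion: 2k - 2*ln(L)."""
    return 2 * n_params - 2 * loglik

def bic(loglik: float, n_params: int, n_obs: int) -> float:
    """Bayesian information criterion: k*ln(n) - 2*ln(L)."""
    return n_params * math.log(n_obs) - 2 * loglik

# Lower values indicate a better fit/complexity trade-off; BIC penalizes
# extra parameters more heavily than AIC once n_obs exceeds about 7.
```

In practice, two candidate growth models are fit to the same data and the one with the lower criterion value is preferred; likelihood-ratio tests remain the tool of choice when the models are nested.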

Citations: 0
The Mandarin Chinese auditory emotions stimulus database: A validated corpus of monosyllabic Chinese characters.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2025-02-03 DOI: 10.3758/s13428-025-02607-4
Mengyuan Li, Na Li, Anqi Zhou, Huiru Yan, Qiuhong Li, Chifen Ma, Chao Wu

Auditory emotional rhythm can be transmitted by simple syllables. This study aimed to establish and validate an auditory speech dataset containing Mandarin Chinese auditory emotional monosyllables (MCAE-Monosyllable), a resource that has not been previously available. A total of 422 Chinese monosyllables were recorded by six professional Mandarin actors, each expressing seven emotions: neutral, happy, angry, sad, fearful, disgusted, and surprised. Additionally, each neutral voice was recorded in four Chinese tones. After standardization and energy balance, the recordings were evaluated by 720 Chinese college students for emotional categories (forced to choose one out of seven emotions) and emotional intensity (rated on a scale of 1-9). The final dataset consists of 18,089 valid Chinese monosyllabic pronunciations (neutrality: 9425, sadness: 2453, anger: 2024, surprise: 1699, disgust: 1624, happiness: 590, fear: 274). On average, neutrality had the highest accuracy rate (79%), followed by anger (75%) and sadness (75%), surprise (74%), happiness (73%), disgust (72%), and finally fear (67%). We provide detailed validation results, acoustic information, and perceptual intensity rating values for each sound. The MCAE-Monosyllable database serves as a valuable resource for neural decoding of Chinese emotional speech, cross-cultural language research, and behavioral or clinical studies related to language and emotional disorders. The database can be obtained within the Open Science Framework ( https://osf.io/h3uem/?view_only=047dfd08dbb64ad0882410da340aa271 ).

Citations: 0
Are we capturing individual differences? Evaluating the test-retest reliability of experimental tasks used to measure social cognitive abilities.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2025-01-31 DOI: 10.3758/s13428-025-02606-5
Charlotte R Pennington, Kayley Birch-Hurst, Matthew Ploszajski, Kait Clark, Craig Hedge, Daniel J Shaw

Social cognitive skills are crucial for positive interpersonal relationships, health, and wellbeing, and encompass both automatic and reflexive processes. To assess this myriad of skills, researchers have developed numerous experimental tasks that measure automatic imitation, emotion recognition, empathy, perspective taking, and intergroup bias, and have used these to reveal important individual differences in social cognition. However, the very reason these tasks produce robust experimental effects - low between-participant variability - can make their use as correlational tools problematic. We performed an evaluation of test-retest reliability for common experimental tasks that measure social cognition. One hundred and fifty participants completed the race-Implicit Association Test (r-IAT), Stimulus-Response Compatibility (SRC) task, Emotional Go/No-Go (eGNG) task, Dot Perspective-Taking (DPT) task, and State Affective Empathy (SAE) task, as well as the Interpersonal Reactivity Index (IRI) and indices of Explicit Bias (EB), across two sessions within 3 weeks. Estimates of test-retest reliability varied considerably between tasks and their indices: the eGNG task had good reliability (ICC = 0.63-0.69); the SAE task had moderate-to-good reliability (ICC = 0.56-0.77); the r-IAT had moderate reliability (ICC = 0.49); the DPT task had poor-to-good reliability (ICC = 0.24-0.60); and the SRC task had poor reliability (ICC = 0.09-0.29). The IRI had good-to-excellent reliability (ICC = 0.76-0.83) and EB had good reliability (ICC = 0.70-0.77). Experimental tasks of social cognition are used routinely to assess individual differences, but their suitability for this is rarely evaluated. Researchers investigating individual differences must assess the test-retest reliability of their measures.
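An ICC of the kind reported above can be computed directly from the participants-by-sessions score matrix. As an illustrative sketch (assuming a two-way consistency ICC(3,1); the abstract does not state which ICC variant the authors used, so treat this as one plausible choice):

```python
import numpy as np

def icc_3_1(scores: np.ndarray) -> float:
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    `scores` has shape (n_participants, k_sessions)."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)  # participants
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)  # sessions
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Perfectly rank-consistent sessions (session 2 = session 1 + 1) give ICC = 1,
# even though the absolute scores shift between sessions.
perfect = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
```

Because the consistency ICC ignores additive session effects, it is well suited to test-retest designs where practice effects shift everyone's scores by a similar amount.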

Behavior Research Methods, 57(2), 82.
Emotional valence, cloze probability, and entropy: Completion norms for 403 French sentences.
IF 4.6 Zone 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2025-01-31 DOI: 10.3758/s13428-025-02604-7
Jérémy Brunel, Emilie Dujardin, Sandrine Delord, Stéphanie Mathey

Sentence-final completion norms are a useful way to select materials for studies in psycholinguistics, neuroscience, and language processing. In recent decades, the literature has focused on measuring cloze probability and sentence-constraint indexes to account for various contextual expectation effects. However, the emotional content of target words is another factor that may affect word prediction, and it has not yet been examined. The purpose of the present study was to design a French corpus of sentence completion norms for final words varying in both valence and arousal. A total of 1322 young adults participated in an online written cloze procedure, in which they were asked to guess the final missing word of given sentences. At least 275 individuals evaluated each sentence. A cloze probability index was estimated for each sentence ending with a negative, neutral, or positive word, and the level of sentence uncertainty was estimated through the calculation of sentence entropy. We also estimated the emotionality of the beginning of each sentence as complementary information to the valence and arousal values of sentence-ending words. The final corpus of 403 French sentences offers a wide range of cloze predictability contexts for all emotional categories of final words. We hope that these norms will help new research investigating the interplay between language and emotional processing. The collected data and norms are accessible through the Open Science Framework at the following repository link: https://osf.io/7pc46/?view_only=a1ec1c23e28a45b9951c7cecc073e1ac.
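The two indexes named above, cloze probability and sentence entropy, can both be computed directly from the distribution of completions a sentence receives. The sketch below is an illustration (not the authors' code), assuming completions arrive as a simple list of normalized response strings.

```python
import math
from collections import Counter

def cloze_probability(responses):
    """Proportion of respondents producing the modal (most frequent) completion."""
    counts = Counter(responses)
    return counts.most_common(1)[0][1] / len(responses)

def sentence_entropy(responses):
    """Shannon entropy (bits) of the completion distribution; 0 = no uncertainty."""
    total = len(responses)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(responses).values())

# Example: 8 of 10 respondents complete the sentence with "chat" (cat).
responses = ["chat"] * 8 + ["chien"] * 2
print(cloze_probability(responses))           # 0.8
print(round(sentence_entropy(responses), 3))  # 0.722
```

Note that the two indexes are not redundant: two sentences can share the same modal cloze probability yet differ in entropy, depending on how the remaining responses are spread across alternatives.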

Behavior Research Methods, 57(2), 81.
Validation of an online imitation-inhibition task.
IF 4.6 Zone 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date: 2025-01-30 DOI: 10.3758/s13428-024-02557-3
Mareike Westfal, Emiel Cracco, Jan Crusius, Oliver Genschow

People automatically imitate a wide range of behaviors. One of the most commonly used methods to assess imitative behavior is the imitation-inhibition task (Brass et al., 2000). A disadvantage of its original form, however, is that it was validated only for laboratory settings, a time-consuming and costly procedure. Here, we present an approach for conducting the imitation-inhibition task in online settings. We programmed the online version of the imitation-inhibition task in JavaScript and implemented it in online survey software (i.e., Qualtrics). We validated the task in four experiments. Experiment 1 (N = 88) showed that the typical automatic imitation effects can be detected with good psychometric properties. Going one step further, Experiment 2 (N = 182) directly compared the online version of the imitation-inhibition task with its laboratory version and demonstrated that the online version produces similarly strong and reliable effects. In Experiments 3 and 4, we assessed typical moderator effects previously reported in laboratory settings: Experiment 3 (N = 93) demonstrated that automatic imitation can be reliably detected in online settings even when controlling for spatial compatibility. Experiment 4 (N = 104) found, in line with previous research, that individuals imitate hand movements executed by a robot less strongly than movements executed by a human. Taken together, the results show that the online version of the imitation-inhibition task offers an easy-to-use method that enables the measurement of automatic imitation with common online survey software in a reliable and valid fashion.
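The task's key dependent measure, the automatic-imitation (congruency) effect, is the reaction-time difference between incongruent trials (observed movement conflicts with the instructed one) and congruent trials. A minimal sketch of scoring one participant's data, assuming hypothetical trial records with "condition" and "rt_ms" fields (these names are illustrative, not the authors' implementation):

```python
def congruency_effect(trials):
    """Mean RT on incongruent minus congruent trials (ms); larger = stronger imitation."""
    by_cond = {"congruent": [], "incongruent": []}
    for t in trials:
        by_cond[t["condition"]].append(t["rt_ms"])
    mean = lambda xs: sum(xs) / len(xs)
    return mean(by_cond["incongruent"]) - mean(by_cond["congruent"])

# Hypothetical trials from one participant.
trials = [
    {"condition": "congruent", "rt_ms": 420},
    {"condition": "congruent", "rt_ms": 440},
    {"condition": "incongruent", "rt_ms": 490},
    {"condition": "incongruent", "rt_ms": 470},
]
print(congruency_effect(trials))  # 50.0
```

In an online setting, this per-participant difference score is what would then be correlated with other measures or compared across moderator conditions (e.g., human vs. robot hands, as in Experiment 4).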

Behavior Research Methods, 57(2), 80.