
Latest publications in Behavior Research Methods

VOC-ADO: A lexical database for French-speaking adolescents.
IF 4.6 · CAS Tier 2 (Psychology) · Q1 PSYCHOLOGY, EXPERIMENTAL · Pub Date: 2025-04-02 · DOI: 10.3758/s13428-025-02656-9
Manuel Gimenes, Eric Lambert, Louise Chaussoy, Maximiliano A Wilson, Pauline Quémart

We present VOC-ADO, a database of the written vocabulary of French adolescents between the ages of 11 and 15 (French secondary school students). VOC-ADO provides a wealth of lexical information for 110,338 words listed in school textbooks of all disciplines (i.e., academic vocabulary), as well as novels, comics, and magazines (i.e., non-academic vocabulary). For each word, several indexes of frequency and lexical dispersion are reported, as well as word length, syntactic categories, orthographic neighborhood size, and lemma frequency. Each analysis is presented separately for the Academic and Non-academic subcorpora, as well as for the overall Global corpus. Analyses of the corpora indicate that the Academic subcorpus contains a smaller variety of unique words than the Non-academic subcorpus and exhibits higher lexical sophistication. By contrast, there is a larger proportion of content words in non-academic media than in school textbooks. Finally, VOC-ADO shows a strong frequency correlation with Manulex, a French database of elementary school vocabulary, and Lexique, a lexical database of adult vocabulary. However, many words present in VOC-ADO are not found in elementary school vocabulary. These results underscore the need to examine lexical development beyond elementary school, considering the unique characteristics of the written vocabulary encountered by French-speaking adolescents. In this regard, VOC-ADO provides researchers, educators, and clinicians interested in adolescent literacy with a valuable tool to select and analyze words based on specific characteristics. The database is freely available and can be downloaded by clicking on the following link: VOC-ADO Database link.
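Because the abstract highlights selecting and analyzing words by specific characteristics, a brief illustrative sketch may help: filtering a lexical table on its frequency columns with pandas. The column names (`freq_academic`, `freq_nonacademic`, etc.) and the example rows are hypothetical placeholders, not VOC-ADO's actual schema.

```python
import pandas as pd

# Hypothetical rows in the spirit of VOC-ADO; the real database is a
# downloadable file and its exact column names may differ.
voc = pd.DataFrame({
    "word": ["maison", "photosynthèse", "vélo", "hypoténuse"],
    "freq_academic": [120.5, 15.2, 8.1, 4.3],      # per-million frequency
    "freq_nonacademic": [210.7, 0.4, 95.6, 0.1],
    "length": [6, 13, 4, 10],
    "pos": ["NOM", "NOM", "NOM", "NOM"],
})

# Select words that are frequent in textbooks but rare in leisure
# reading: candidate "academic vocabulary" items for a study.
academic_only = voc[(voc["freq_academic"] > 10) &
                    (voc["freq_nonacademic"] < 1)]
print(academic_only["word"].tolist())  # → ['photosynthèse']
```

The same pattern extends to any of the reported variables (length, syntactic category, neighborhood size) by adding further boolean conditions.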

Citations: 0
SingleMALD: Investigating practice effects in auditory lexical decision.
IF 4.6 · CAS Tier 2 (Psychology) · Q1 PSYCHOLOGY, EXPERIMENTAL · Pub Date: 2025-04-02 · DOI: 10.3758/s13428-025-02628-z
Filip Nenadić, Katarina Bujandrić, Matthew C Kelley, Benjamin V Tucker

We present SingleMALD, a large-scale auditory lexical decision study in English with a fully crossed design. SingleMALD is freely available and includes over 2 million trials in which 40 native speakers of English responded to over 26,000 different words and over 9,000 different pseudowords across 67 balanced sessions. SingleMALD features a large number of responses per stimulus but a smaller number of participants, complementing the Massive Auditory Lexical Decision (MALD) dataset, which features many listeners but fewer responses per stimulus. In the present report, we also use SingleMALD data to explore how extensive testing affects performance in the auditory lexical decision task. SingleMALD participants show signs of favoring speed over accuracy as the sessions unfold. Additionally, we find that the relationship between participant performance and two lexical predictors (word frequency and phonological neighborhood density) changes as sessions unfold, especially for certain lexical predictor values. None of the changes are drastic, indicating that data collected from participants who have been extensively tested are usable, although we recommend accounting for participant experience with the task when performing statistical analyses of the data.
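The closing recommendation (account for participant experience when analyzing the data) can be sketched as a regression that includes the session number and its interaction with a lexical predictor. All data below are simulated for illustration; this is not the SingleMALD analysis or dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical long-format lexical decision data: one row per trial,
# with a standardized word-frequency predictor and the session number.
n = 4000
session = rng.integers(1, 68, n)          # 67 sessions, as in SingleMALD
freq = rng.normal(0, 1, n)                # standardized log frequency
# Simulated RTs: frequency speeds responses, practice speeds responses,
# and the frequency effect shrinks across sessions (an interaction).
rt = 900 - 40*freq - 1.5*session + 0.4*freq*session + rng.normal(0, 80, n)

# Design matrix with an intercept, both predictors, and their
# interaction -- "accounting for participant experience with the task".
X = np.column_stack([np.ones(n), freq, session, freq*session])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
print(dict(zip(["intercept", "freq", "session", "freq_x_session"],
               beta.round(2))))
```

A positive `freq_x_session` estimate here recovers the simulated attenuation of the frequency effect with practice; in real data one would typically also include random effects for participants and items.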

Citations: 0
The simulation-cum-ROC approach: A new approach to generate tailored cutoffs for fit indices through simulation and ROC analysis.
IF 4.6 · CAS Tier 2 (Psychology) · Q1 PSYCHOLOGY, EXPERIMENTAL · Pub Date: 2025-04-01 · DOI: 10.3758/s13428-025-02638-x
Katharina Groskurth, Nivedita Bhaktha, Clemens M Lechner

To evaluate model fit in structural equation modeling, researchers commonly compare fit indices against fixed cutoff values (e.g., CFI ≥ .950). However, methodologists have cautioned against overgeneralizing cutoffs, highlighting that cutoffs permit valid judgments of model fit only in empirical settings similar to the simulation scenarios from which these cutoffs originate. This is because fit indices are not only sensitive to misspecification but are also susceptible to various model, estimation, and data characteristics. As a solution, methodologists have proposed four principal approaches to obtain so-called tailored cutoffs, which are generated specifically for a given setting. Here, we review these approaches. We find that none of these approaches provides guidelines on which fit index (out of all fit indices of interest) is best suited for evaluating whether the model fits the data in the setting of interest. Therefore, we propose a novel approach combining a Monte Carlo simulation with receiver operating characteristic (ROC) analysis. This so-called simulation-cum-ROC approach generates tailored cutoffs and additionally identifies the most reliable fit indices in the setting of interest. We provide R code and a Shiny app for an easy implementation of the approach. No prior knowledge of Monte Carlo simulations or ROC analysis is needed to generate tailored cutoffs with the simulation-cum-ROC approach.
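As a rough sketch of the idea (not the authors' implementation), one can simulate a fit index under correctly specified and misspecified models and run an ROC analysis over candidate cutoffs. The two CFI distributions below are invented placeholders standing in for real Monte Carlo SEM output.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical CFI values simulated under a correctly specified model
# ("fit", label 1) and a misspecified model ("misfit", label 0). In the
# actual approach these distributions come from Monte Carlo SEM
# simulations tailored to the model, estimator, and sample size.
cfi_fit = rng.normal(0.97, 0.01, 5000)
cfi_misfit = rng.normal(0.93, 0.02, 5000)
values = np.concatenate([cfi_fit, cfi_misfit])
labels = np.concatenate([np.ones(5000), np.zeros(5000)])

# ROC analysis over candidate cutoffs: pick the cutoff that best
# separates fit from misfit (maximum Youden's J = sens + spec - 1).
best_j, best_cut = -1.0, None
for c in np.linspace(values.min(), values.max(), 201):
    pred = values >= c                    # "acceptable fit" decision
    sens = pred[labels == 1].mean()       # true-positive rate
    spec = (~pred)[labels == 0].mean()    # true-negative rate
    j = sens + spec - 1.0
    if j > best_j:
        best_j, best_cut = j, c

print(f"tailored cutoff: CFI >= {best_cut:.3f}, Youden's J = {best_j:.2f}")
```

Repeating this for several fit indices and comparing their maximal J values is one way to identify which index discriminates best in the setting of interest, which is the additional question the approach answers.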

Citations: 0
Development and validation of the Interoceptive States Vocalisations (ISV) and Interoceptive States Point Light Displays (ISPLD) databases.
IF 4.6 · CAS Tier 2 (Psychology) · Q1 PSYCHOLOGY, EXPERIMENTAL · Pub Date: 2025-03-31 · DOI: 10.3758/s13428-024-02514-0
Federica Biotti, Lily Sidnick, Anna L Hatton, Diar Abdlkarim, Alan Wing, Janet Treasure, Francesca Happé, Rebecca Brewer

The ability to perceive others' emotions and one's own interoceptive states has been the subject of extensive research. Very little work, however, has investigated the ability to recognise others' interoceptive states, such as whether an individual is feeling breathless, nauseated, or fatigued. This is likely owing to the dearth of stimuli available for use in research studies, despite the clear relevance of this ability to social interaction and effective caregiving. This paper describes the development and validation of two stimulus sets for use in research into the perception of others' interoceptive states. The Interoceptive States Vocalisations (ISV) database and the Interoceptive States Point Light Displays (ISPLD) database include 191 vocalisation and 159 point light display stimuli. Both stimulus sets underwent two phases of validation, and all stimuli were scored in terms of their quality and recognisability, using five different measures. The ISV also includes control stimuli featuring non-interoceptive vocalisations. Some interoceptive states were consistently recognised better than others, but variability was observed within, as well as between, stimulus categories. Stimuli are freely available for use in research, and are presented alongside all stimulus quality scores, in order for researchers to select the most appropriate stimuli based on individual research questions.

Citations: 0
The Validated Touch-Video Database.
IF 4.6 · CAS Tier 2 (Psychology) · Q1 PSYCHOLOGY, EXPERIMENTAL · Pub Date: 2025-03-31 · DOI: 10.3758/s13428-025-02655-w
Sophie Smit, Anina N Rich

Visually observing a touch quickly reveals who is being touched, how it might feel, and the broader social or emotional context, shaping our interpretation of such interactions. Investigating these dimensions is essential for understanding how tactile experiences are processed individually and how we empathise with observed sensations in others. Here, we expand available resources for studying visually perceived touch by providing a wide-ranging set of dynamic interactions that specifically focus on the sensory qualities of touch. The Validated Touch-Video Database (VTD) consists of a set of 90 videos depicting tactile interactions with a stationary left hand, viewed from a first-person perspective. In each video, a second hand makes contact either directly (e.g., with fingers or an open palm) or using an object (e.g., a soft brush or scissors), with variations across dimensions such as hedonic qualities, arousal, threat, touch type, and the object used. Validation by 350 participants (283 women, 66 men, 1 non-binary) involved categorising the videos as 'neutral', 'pleasant', 'unpleasant', or 'painful' and rating arousal and threat levels. Our findings reveal high inter-subject agreement, with painful touch videos eliciting the highest arousal and threat ratings, while neutral touch videos serve as a baseline. Exploratory analyses indicate that women rated the videos as more threatening and painful than men, suggesting potential gender differences in the visual perception of negatively valenced touch stimuli. The VTD provides a comprehensive resource for researchers investigating the sensory and emotional dimensions of observed touch.

Citations: 0
Quantifying and explaining heterogeneity in meta-analytic structural equation modeling: Methods and illustrations.
IF 4.6 · CAS Tier 2 (Psychology) · Q1 PSYCHOLOGY, EXPERIMENTAL · Pub Date: 2025-03-31 · DOI: 10.3758/s13428-025-02647-w
Zijun Ke, Han Du, Rebecca Y M Cheung, Yingtian Liang, Junling Liu, Wenqin Chen

As a method for developing and testing hypotheses, meta-analytic structural equation modeling (MASEM) has drawn the interest of scholars. However, challenges remain in how to model and explain meaningful heterogeneity in structural equation modeling (SEM) parameters. To address this issue, two novel methods have recently been proposed in the literature: Bayesian MASEM (BMASEM) and one-stage MASEM (OSMASEM). How the two methods can be applied to address actual psychological research questions involving heterogeneity remains a topic of debate and confusion. In this study, we describe and compare the two methods using two illustrations: the mediating mechanism of a mindfulness-based intervention and the factor structure of the Rosenberg Self-Esteem Scale. In the illustrations, both methods were used to test the moderating effect of a covariate, to build a prediction equation for effect sizes in specific populations, and to evaluate the equivalence of standardized factor loadings of a scale. The study ends with a discussion of practical issues that may arise when applying BMASEM and OSMASEM.

Citations: 0
The first use of strict substitution colorimetry for collecting data on threshold perceptual color differences in humans.
IF 4.6 · CAS Tier 2 (Psychology) · Q1 PSYCHOLOGY, EXPERIMENTAL · Pub Date: 2025-03-31 · DOI: 10.3758/s13428-025-02651-0
Sergey Gladilin, Maria Gracheva, Ivan Konovalenko, Ilya Nikolaev, Anna Nikolaeva, Mikhail Tchobanou

Colorimetry is a technology for quantifying human color perception. It is of great importance in vision science and color psychology, helping to develop color standards, color measurement devices, and color management systems. The basic method of present-day colorimetry involves comparing two simultaneously presented stimuli (semi-fields), so two different retinal areas are exposed to the colors to be matched. This drawback is not inherent to the strict substitution method, in which the two color stimuli are presented alternately on the same area of the retina. This method was proposed almost 70 years ago by Bongard and Smirnov but has not been developed since, owing to its apparent complexity. Today, light sources with a controlled emission spectrum are widely available, making it much easier to build a colorimetric installation that implements the strict substitution method. In this paper, we propose a comprehensive procedure for implementing strict substitution colorimetry, aimed at collecting data on threshold perceptual color differences in humans. Our pilot experimental results show good consistency and repeatability. We believe that the suggested technique will allow collecting the missing color difference data at the periphery of the wide color gamut (WCG) as well as for extremely high dynamic range (HDR) colors.
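Threshold data of this kind are typically gathered with an adaptive procedure. As a generic illustration only (the paper's actual protocol may differ), here is a 1-up/2-down staircase run against a simulated observer whose psychometric function is a hypothetical logistic in the size of the color difference.

```python
import numpy as np

rng = np.random.default_rng(3)

def detects(delta_e, threshold=1.0, slope=3.0):
    """Simulated observer: detection probability rises with the size of
    the color difference (a hypothetical psychometric function)."""
    p = 1.0 / (1.0 + np.exp(-slope * (delta_e - threshold)))
    return rng.random() < p

# A generic 1-up/2-down staircase, which converges near the
# 70.7%-detection point; not the authors' exact procedure.
delta, step = 4.0, 0.4
reversals, streak, last_dir = [], 0, None
while len(reversals) < 12:
    if detects(delta):
        streak += 1
        if streak < 2:
            continue                    # step down only after 2 detections
        streak, new_dir = 0, "down"
    else:
        streak, new_dir = 0, "up"
    if last_dir is not None and new_dir != last_dir:
        reversals.append(delta)         # record direction changes
    delta = max(delta - step, 0.05) if new_dir == "down" else delta + step
    last_dir = new_dir

# Threshold estimate: mean of the last few reversal points
est = float(np.mean(reversals[-8:]))
print(f"threshold estimate: {est:.2f}")
```

Averaging late reversals is a standard way to summarize a staircase; the estimate should sit slightly above the simulated 50%-point because of the 2-down rule.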

Citations: 0
On a generalizable approach for sample size determination in Bayesian t tests.
IF 4.6 · CAS Tier 2 (Psychology) · Q1 PSYCHOLOGY, EXPERIMENTAL · Pub Date: 2025-03-31 · DOI: 10.3758/s13428-025-02654-x
Tsz Keung Wong, Jorge N Tendeiro

The Bayes factor is often proposed as a superior replacement for p values in null hypothesis testing, and many user-friendly, easily accessible statistical software tools now facilitate Bayesian tests. Meanwhile, Bayes factor design analysis (BFDA), the counterpart of power analysis, has been proposed to ensure the maximum efficiency and informativeness of a study. Tools for conducting BFDA are limited and rely mostly on Monte Carlo methodology, although methods based on root-finding algorithms have recently been developed (e.g., Pawel and Held, 2025) that overcome weaknesses of the simulation approaches. This paper builds on these advancements by presenting a method that generalizes the existing approach to conducting BFDA for sample size determination in t tests. The major advantage of the current method is that it does not assume normality of the effect size estimate, allowing more flexibility in the specification of the design and analysis priors. We develop and showcase a user-friendly Shiny app that facilitates the use of BFDA, illustrated with an empirical example. Furthermore, using our method, we explore the operating characteristics of Bayes factors under various priors.
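For intuition about what BFDA computes, a plain Monte Carlo version for a one-sample t test can be sketched as follows. The Bayes factor here uses a BIC approximation as an illustrative stand-in for the default Bayes factors such tools use, and the brute-force grid search is exactly the simulation approach that the paper's root-finding method is designed to replace.

```python
import numpy as np

rng = np.random.default_rng(11)

def bf10_bic(x):
    """BIC-approximated Bayes factor for a one-sample t test of
    H1 (free mean) against H0 (mean = 0); an illustrative stand-in
    for default Bayes factors."""
    n = len(x)
    sse1 = np.sum((x - x.mean())**2)            # residual SS, mean free
    sse0 = np.sum(x**2)                         # residual SS, mean = 0
    ll1 = -0.5*n*(np.log(2*np.pi*sse1/n) + 1)   # log-lik at MLE, H1
    ll0 = -0.5*n*(np.log(2*np.pi*sse0/n) + 1)   # log-lik at MLE, H0
    bic1 = -2*ll1 + 2*np.log(n)                 # mean + variance
    bic0 = -2*ll0 + 1*np.log(n)                 # variance only
    return np.exp((bic0 - bic1) / 2)

def bfda_n(d=0.5, bf_crit=3.0, target=0.8, sims=1000):
    """Smallest n on a grid with P(BF10 >= bf_crit | effect d) >= target:
    a plain Monte Carlo BFDA for sample size determination."""
    for n in range(10, 301, 5):
        hits = sum(bf10_bic(rng.normal(d, 1, n)) >= bf_crit
                   for _ in range(sims))
        if hits / sims >= target:
            return n
    return None

n_req = bfda_n()
print("required n:", n_req)
```

The simulation loop makes the trade-off visible: each candidate n needs many replications, which is why analytic root-finding over n is attractive.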

Citations: 0
LEyes: A lightweight framework for deep learning-based eye tracking using synthetic eye images.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date : 2025-03-31 DOI: 10.3758/s13428-025-02645-y
Sean Anthony Byrne, Virmarie Maquiling, Marcus Nyström, Enkelejda Kasneci, Diederick C Niehorster

Deep learning methods have significantly advanced the field of gaze estimation, yet the development of these algorithms is often hindered by a lack of appropriate, publicly accessible training datasets. Moreover, models trained on the few available datasets often fail to generalize to new datasets due to both discrepancies in hardware and biological diversity among subjects. To mitigate these challenges, the research community has frequently turned to synthetic datasets, although this approach also has drawbacks, such as the computational resources and labor required to create photorealistic renderings of eye images for use as training data. In response, we introduce "Light Eyes" (LEyes), a novel framework that diverges from traditional photorealistic approaches by utilizing simple synthetic image generators to train neural networks to detect key image features such as pupils and corneal reflections. LEyes facilitates the generation of synthetic data on the fly, is adaptable to any recording device, and enhances the efficiency of training neural networks for a wide range of gaze-estimation tasks. The presented evaluations show that LEyes, in many cases, outperforms existing methods in accurately identifying and localizing pupils and corneal reflections across diverse datasets. Additionally, models trained using LEyes data outperform standard eye trackers while employing more cost-effective hardware, offering a promising avenue for overcoming the current limitations of gaze-estimation technology.
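The core idea — cheap, non-photorealistic synthetic frames generated on the fly, with free ground-truth labels — can be caricatured in a few lines. This is a toy sketch under our own assumptions (shapes, intensities, noise levels are all invented), not the actual LEyes generators:

```python
import numpy as np

# Toy synthetic eye-camera frame: a dark elliptical pupil plus a small bright
# corneal reflection (CR) on a noisy mid-grey background, returned together
# with the ground-truth centers a landmark-detection network would learn.

def synth_eye_image(rng, size=64):
    yy, xx = np.mgrid[0:size, 0:size].astype(float)
    img = np.full((size, size), 0.6)                  # mid-grey background
    # Random axis-aligned elliptical pupil.
    pcx, pcy = rng.uniform(size * 0.3, size * 0.7, 2)
    prx, pry = rng.uniform(size * 0.08, size * 0.18, 2)
    pupil = ((xx - pcx) / prx) ** 2 + ((yy - pcy) / pry) ** 2 <= 1.0
    img[pupil] = 0.1                                  # dark pupil
    # CR: small bright Gaussian blob somewhere over the pupil region.
    ccx = pcx + rng.uniform(-prx, prx)
    ccy = pcy + rng.uniform(-pry, pry)
    img += 0.8 * np.exp(-((xx - ccx) ** 2 + (yy - ccy) ** 2) / (2 * 1.5 ** 2))
    img += rng.normal(0.0, 0.03, img.shape)           # sensor noise
    return np.clip(img, 0.0, 1.0), (pcx, pcy), (ccx, ccy)

rng = np.random.default_rng(0)
img, pupil_center, cr_center = synth_eye_image(rng)   # one labeled frame
```

Every call yields a fresh labeled example, so an arbitrarily large training stream is available without manual annotation, and the generator's parameters can be retuned to mimic a new camera.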

Sean Anthony Byrne, Virmarie Maquiling, Marcus Nyström, Enkelejda Kasneci, Diederick C Niehorster (2025). LEyes: A lightweight framework for deep learning-based eye tracking using synthetic eye images. Behavior Research Methods, 57(5), 129. DOI: 10.3758/s13428-025-02645-y. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11958443/pdf/
Citations: 0
DDM-UI: A user interface in R for the discrepancy diffuse model in behavioral research.
IF 4.6 CAS Tier 2 (Psychology) Q1 PSYCHOLOGY, EXPERIMENTAL Pub Date : 2025-03-28 DOI: 10.3758/s13428-025-02648-9
Miguel Aguayo-Mendoza, Cristiano Valerio Dos Santos

The diffuse discrepancy model (DiffDiscM) has proven to be a valuable tool for simulating both Pavlovian and operant conditioning phenomena. However, its original implementation in Pascal (SelNet1© interface) has limitations regarding accessibility and ease of use. This paper presents DDM-UI, a new user interface developed in R for the DiffDiscM. DDM-UI offers an intuitive, open-source platform that enables researchers to configure, run, and analyze DiffDiscM simulations more efficiently. The main features of DDM-UI are described, including network architecture setup, trial and contingency definition, and result visualization. Three use cases demonstrate the practical application of DDM-UI in simulating various conditioning experiments, including superstition, Pavlovian/autoshaped impulsivity, and complex phenomena such as blocking, compound conditioning, and successive conditioning. The validation process highlights DDM-UI's ability to replicate previous findings while offering enhanced data visualization and analysis capabilities. DDM-UI represents a significant advancement in the accessibility of the DiffDiscM, facilitating its use in behavioral research and promoting reproducibility in the field. The paper also discusses the limitations of the current implementation and suggests future developments to further enhance the tool's capabilities in exploring complex learning and behavioral phenomena.
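The DiffDiscM itself is specified in the paper; as a generic stand-in for the discrepancy-driven learning that such simulators implement, the much simpler classic Rescorla–Wagner rule (not the DiffDiscM) already reproduces blocking, one of the phenomena listed above:

```python
# Rescorla-Wagner: all cues present on a trial share one prediction error
# (discrepancy), delta-V = alpha * (lambda - sum of V over present cues).

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """trials: iterable of (cues_present, us_present); returns strengths V."""
    V = {}
    for cues, us in trials:
        prediction = sum(V.get(c, 0.0) for c in cues)
        error = alpha * ((lam if us else 0.0) - prediction)  # shared discrepancy
        for c in cues:
            V[c] = V.get(c, 0.0) + error
    return V

# Blocking: pretraining A-US lets A absorb the prediction, so B learns little
# during the AB-US compound phase; the control group gets only compound trials.
blocking = [({"A"}, True)] * 50 + [({"A", "B"}, True)] * 50
control = [({"A", "B"}, True)] * 50
V_block = rescorla_wagner(blocking)
V_ctrl = rescorla_wagner(control)
```

After pretraining, the compound's prediction error is near zero, so B ends with far less associative strength in the blocking group than in the control group — the same qualitative pattern the DDM-UI use cases simulate with the richer network model.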

Miguel Aguayo-Mendoza, Cristiano Valerio Dos Santos (2025). DDM-UI: A user interface in R for the discrepancy diffuse model in behavioral research. Behavior Research Methods, 57(5), 128. DOI: 10.3758/s13428-025-02648-9.
Citations: 0