Beyond performance: A POMDP-based machine learning framework for expert cognition.
Hao He, Yucheng Duan
Pub Date: 2025-11-24 | DOI: 10.3758/s13428-025-02875-0 | Behavior Research Methods, 58(1), 6
This study explores expert-novice differences in anticipation under uncertainty by combining partially observable Markov decision process (POMDP) modeling with machine learning classification. Forty-eight participants (24 experts, 24 novices) completed a basketball pass/shot anticipation task. Through POMDP modeling, two cognitive parameters, sensory precision (SP) and prior belief (pB), were extracted to capture internal decision processes. Results showed that experts fit the POMDP model more closely, requiring more iterations for parameter convergence and achieving higher pseudo R² values than novices. Experts demonstrated significantly higher SP, indicating a superior ability to filter key cues under noisy conditions. Their pB values remained closer to neutral, suggesting flexible reliance on prior knowledge. In contrast, novices exhibited more biased priors and lower, more dispersed SP. Machine learning analyses revealed that SP and pB jointly formed distinct clusters for experts and novices in a two-dimensional parameter space, with classification accuracies exceeding 90% across multiple methods. These findings indicate that expertise entails both enhanced perceptual precision and adaptive prior calibration, reflecting deeper cognitive reorganization rather than simple skill increments. Our dual-parameter approach offers a model-based perspective on expert cognition and may inform future research on the multifaceted nature of expertise.
{"title":"Beyond performance: A POMDP-based machine learning framework for expert cognition.","authors":"Hao He, Yucheng Duan","doi":"10.3758/s13428-025-02875-0","DOIUrl":"https://doi.org/10.3758/s13428-025-02875-0","url":null,"abstract":"<p><p>This study explores expert-novice differences in anticipation under uncertainty by combining partially observable Markov decision process (POMDP) modeling with machine learning classification. Forty-eight participants (24 experts, 24 novices) completed a basketball pass/shot anticipation task. Through POMDP modeling, two cognitive parameters-sensory precision (SP) and prior belief (pB)-were extracted to capture internal decision processes. Results showed that experts fit the POMDP model more closely, requiring more iterations for parameter convergence and achieving higher pseudo R<sup>2</sup> values than novices. Experts demonstrated significantly higher SP, indicating superior ability to filter key cues under noisy conditions. Their pB values remained closer to neutral, suggesting flexible reliance on prior knowledge. In contrast, novices exhibited more biased priors and a lower, more dispersed SP. Machine learning analyses revealed that SP and pB jointly formed distinct clusters for experts and novices in a two-dimensional parameter space, with classification accuracies exceeding 90% across multiple methods. These findings indicate that expertise entails both enhanced perceptual precision and adaptive prior calibration, reflecting deeper cognitive reorganization rather than simple skill increments. Our dual-parameter approach offers a model-based perspective on expert cognition and may inform future research on the multifaceted nature of expertise.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 1","pages":"6"},"PeriodicalIF":3.9,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145595568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
js-mEye: An extension and plugin for the measurement of pupil size in the online platform jsPsych.
Madeline Jarvis, Adam Vasarhelyi, Joe Anderson, Caitlyn Mulley, Ottmar V Lipp, Luke J Ney
Pub Date: 2025-11-24 | DOI: 10.3758/s13428-025-02901-1 | Behavior Research Methods, 58(1), 8
The measurement of pupil size has become a topic of interest in psychology research over the past two decades due to its sensitivity to psychological processes such as arousal and cognitive load. However, pupil measurements have been limited by the necessity of conducting experiments in laboratory settings with high-quality, costly equipment. The current article describes the development and use of a jsPsych plugin and extension that incorporates existing software for estimating pupil size using consumer-grade hardware, such as a webcam. We validated this new program (js-mEye) across two separate studies, each of which manipulated screen luminance and color using a novel luminance task, as well as different levels of cognitive load using the N-back and Stroop tasks. Changes in luminance and color produced significant changes in pupil size in the hypothesized direction. Changes in cognitive load induced in the N-back and Stroop tasks produced less clear findings; however, these findings were explained to some extent when participant engagement, indexed by task performance, was controlled for. Most importantly, all data were at least moderately correlated with data simultaneously recorded using an EyeLink 1000, suggesting that mEye was able to effectively substitute for a gold-standard eye-tracking device. This work presents an exciting future direction for pupillometry and, with further validation, may provide a platform for measuring pupil size in online research studies, as well as in laboratory-based experiments that require minimal equipment.
{"title":"js-mEye: An extension and plugin for the measurement of pupil size in the online platform jsPsych.","authors":"Madeline Jarvis, Adam Vasarhelyi, Joe Anderson, Caitlyn Mulley, Ottmar V Lipp, Luke J Ney","doi":"10.3758/s13428-025-02901-1","DOIUrl":"https://doi.org/10.3758/s13428-025-02901-1","url":null,"abstract":"<p><p>The measurement of pupil size has become a topic of interest in psychology research over the past two decades due to its sensitivity to psychological processes such as arousal or cognitive load. However, pupil measurements have been limited by the necessity to conduct experiments in laboratory settings using high-quality and costly equipment. The current article describes the development and use of a jsPsych plugin and extension that incorporates an existing software that estimates pupil size using consumer-grade hardware, such as a webcam. We validated this new program (js-mEye) across two separate studies, which each manipulated screen luminance and color using a novel luminance task, as well as different levels of cognitive load using the N-back and the Stroop tasks. Changes in luminance and color produced significant changes in pupil size in the hypothesized direction. Changes in cognitive load induced in the N-back and Stroop tasks produced less clear findings; however, these findings were explained to some extent when participant engagement - indexed by task performance - was controlled for. Most importantly, all data were at least moderately correlated with data simultaneously recorded using an EyeLink 1000, suggesting that mEye was able to effectively substitute for a gold-standard eye-tracking device. This work presents an exciting future direction for pupillometry and, with further validation, may present a platform for measuring pupil size in online research studies, as well as in laboratory-based experiments that require minimal equipment.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 1","pages":"8"},"PeriodicalIF":3.9,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145595586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing the validity evidence for habit measures based on time pressure.
Pablo Martínez-López, Antonio Vázquez-Millán, Francisco Garre-Frutos, David Luque
Pub Date: 2025-11-24 | DOI: 10.3758/s13428-025-02865-2 | Behavior Research Methods, 58(1), 7
Animal research has shown that repeatedly performing a rewarded action leads to its transition into a habit: an inflexible response controlled by stimulus-response associations. Efforts to reproduce this principle in humans have yielded mixed results. Only two laboratory paradigms have demonstrated behavior habitualization following extensive instrumental training compared to minimal training: the forced-response task and the "aliens" outcome-devaluation task. These paradigms assess habitualization through distinct measures. The forced-response task focuses on the persistence of a trained response when a reversal is required, whereas the outcome-devaluation task measures reaction-time switch costs, that is, slowdowns in goal-directed responses that conflict with the trained habit. Although both measures have produced results consistent with learning theory, showing stronger evidence of habits in overtrained conditions, their construct validity remains insufficiently established. In this study, participants completed 4 days of training in each paradigm. We replicated previous results in the forced-response task; in the outcome-devaluation task, a similar pattern emerged: the response-speed advantage gained through training was lost. We then examined the reliability of each measure and evaluated their convergent validity. Habitual responses in the forced-response task and reaction-time switch costs in the outcome-devaluation task demonstrated good reliability, allowing us to assess whether individual differences remained stable. However, the two measures were not associated, providing no evidence of convergent validity. This suggests that these measures capture distinct aspects of the balance between habitual and goal-directed control. Our results highlight the need for further evaluation of the validity and reliability of current measures of habitual control in humans.
{"title":"Assessing the validity evidence for habit measures based on time pressure.","authors":"Pablo Martínez-López, Antonio Vázquez-Millán, Francisco Garre-Frutos, David Luque","doi":"10.3758/s13428-025-02865-2","DOIUrl":"https://doi.org/10.3758/s13428-025-02865-2","url":null,"abstract":"<p><p>Animal research has shown that repeatedly performing a rewarded action leads to its transition into a habit-an inflexible response controlled by stimulus-response associations. Efforts to reproduce this principle in humans have yielded mixed results. Only two laboratory paradigms have demonstrated behavior habitualization following extensive instrumental training compared to minimal training: the forced-response task and the \"aliens\" outcome-devaluation task. These paradigms assess habitualization through distinct measures. The forced-response task focuses on the persistence of a trained response when a reversal is required, whereas the outcome-devaluation task measures reaction time switch costs-slowdowns in goal-directed responses conflicting with the trained habit. Although both measures have produced results consistent with the learning theory-showing stronger evidence of habits in overtrained conditions-their construct validity remains insufficiently established. In this study, participants completed 4 days of training in each paradigm. We replicated previous results in the forced-response task; in the outcome-devaluation task, a similar pattern emerged, observing the loss of a response speed advantage gained through training. We then examined the reliability of each measure and evaluated their convergent validity. Habitual responses in the forced-response task and reaction time switch costs in the outcome-devaluation task demonstrated good reliability, allowing us to assess whether individual differences remained stable. However, the two measures were not associated, providing no evidence of convergent validity. This suggests that these measures capture distinct aspects of the balance between habitual and goal-directed control. Our results highlight the need for further evaluation of the validity and reliability of current measures of habitual control in humans.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 1","pages":"7"},"PeriodicalIF":3.9,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145595565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual attention graph.
Kai-Fu Yang, Yong-Jie Li
Pub Date: 2025-11-24 | DOI: 10.3758/s13428-025-02892-z | Behavior Research Methods, 58(1), 4
Visual attention plays a critical role when our visual system executes active visual tasks by interacting with the physical scene. However, how the brain encodes relationships between visual objects in its internal, psychological representation of a scene deserves exploration. Predicting visual fixations or scanpaths is a common way to explore the visual attention and behaviors of human observers when viewing a scene. Most existing methods encode visual attention using individual fixations or scanpaths derived from raw gaze-shift data collected from human observers. This may not capture the common attention pattern well, because raw gaze-shift data alone, without the semantic information of the viewed scene, contain high inter- and intra-observer variability. To address this issue, we propose a new attention representation, called the visual attention graph (VAG), which simultaneously encodes visual saliency and scanpaths in a graph-based representation and better reveals the common attention behavior of human observers. In the visual attention graph, the semantic-based scanpath is defined as a path on the graph, while the saliency of objects is obtained by computing the fixation density on each node. Systematic experiments demonstrate that the proposed attention graph, combined with our new evaluation metrics, provides a better benchmark for evaluating attention prediction methods. Additional experiments demonstrate the promising potential of the proposed attention graph for assessing human cognitive states, such as autism spectrum disorder screening and age classification.
{"title":"Visual attention graph.","authors":"Kai-Fu Yang, Yong-Jie Li","doi":"10.3758/s13428-025-02892-z","DOIUrl":"https://doi.org/10.3758/s13428-025-02892-z","url":null,"abstract":"<p><p>Visual attention plays a critical role when our visual system executes active visual tasks by interacting with the physical scene. However, how to encode visual object relationships in the psychological world of the brain deserves exploration. Predicting visual fixations or scanpaths is a usual way to explore the visual attention and behaviors of human observers when viewing a scene. Most existing methods encode visual attention using individual fixations or scanpaths derived from raw gaze-shift data collected from human observers. This may not capture the common attention pattern well, because without considering the semantic information of the viewed scene, raw gaze shift data alone contain high inter- and intra-observer variability. To address this issue, we propose a new attention representation, called visual attention graph (VAG), to simultaneously code the visual saliency and scanpath in a graph-based representation and better reveal the common attention behavior of human observers. In the visual attention graph, the semantic-based scanpath is defined by the path on the graph, while the saliency of objects can be obtained by computing fixation density on each node. Systemic experiments demonstrate that the proposed attention graph combined with our new evaluation metrics provides a better benchmark for evaluating attention prediction methods. Meanwhile, extra experiments demonstrate the promising potential of the proposed attention graph in assessing human cognitive states, such as autism spectrum disorder screening and age classification.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 1","pages":"4"},"PeriodicalIF":3.9,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145595583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Magic Curiosity Arousing Tricks (MagicCATs) database in Italian younger and middle-aged adults: Descriptive statistics and rule-based machine learning.
Caterina Padulo, Michela Ponticorvo, Beth Fairfield
Pub Date: 2025-11-19 | DOI: 10.3758/s13428-025-02884-z | Behavior Research Methods, 58(1), 1
Epistemic emotions, and in particular curiosity, seem to enhance memory both for the specific information that stimulates the individual's curiosity and for information presented in close temporal proximity. Most studies on memory and curiosity have adopted trivia questions to elicit curiosity. However, the amount and range of interest that trivia questions elicit are unclear, and there is no established, universal trivia item pool guaranteed to elicit comparable levels of curiosity across individuals of all ages. Thus, one substantial challenge when studying curiosity is systematically inducing it in controlled experimental settings. Recently, an innovative database called Magic Curiosity Arousing Tricks (MagicCATs) has been published. This database includes 166 short magic-trick video clips that use different materials and is designed to induce curiosity, surprise, and interest. Here, we aimed to validate this dataset in the Italian population by reporting the basic characteristics and norms of the magic-trick video clips in younger and middle-aged adults. We also carried out association rule learning, a rule-based machine learning and data mining method, to clarify the co-occurrences between the different epistemic emotions and to aid researchers in stimulus selection. Association rules highlight relationships or associations between the variables in our datasets and can be used together with descriptive statistics for stimulus selection in psychological experiments.
{"title":"The Magic Curiosity Arousing Tricks (MagicCATs) database in Italian younger and middle-aged adults: Descriptive statistics and rule-based machine learning.","authors":"Caterina Padulo, Michela Ponticorvo, Beth Fairfield","doi":"10.3758/s13428-025-02884-z","DOIUrl":"10.3758/s13428-025-02884-z","url":null,"abstract":"<p><p>Epistemic emotions, and in particular curiosity, seem to enhance memory for both the specific information that stimulates the individual's curiosity and information presented in close temporal proximity. Most studies on memory and curiosity have adopted trivia questions to elicit curiosity. However, the amount and range of interest that trivia questions elicit are unclear, and there is no established, universal trivia item pool guaranteed to elicit comparable levels of curiosity across individuals of all ages. Thus, one substantial challenge when studying curiosity is systematically inducing it in controlled experimental settings. Recently, an innovative database called Magic Curiosity Arousing Tricks (MagicCATs) has been published. This database includes 166 short magic-trick video clips that adopt different materials and is designed to induce curiosity, surprise, and interest. Here, we aimed to validate this dataset in the Italian population by reporting the basic characteristics and the norms of the magic-trick video clips in younger and middle-aged adults. We also carried out association rule learning, a rule-based machine learning and data mining method to aid understanding of the co-occurrences between the different epistemic emotions and aid researchers in stimulus selection. Association rules underline relationships or associations between the variables in our datasets and can be used in association with descriptive statistics for stimulus selection in psychological experiments.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 1","pages":"1"},"PeriodicalIF":3.9,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12630270/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145556271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ADVANCE toolkit: Automated descriptive video annotation in naturalistic child environments.
Naomi K Middelmann, Jean-Paul Calbimonte, Emily B Wake, Manon E Jaquerod, Nastia Junod, Jennifer Glaus, Olga Sidiropoulou, Kerstin J Plessen, Micah M Murray, Matthew J Vowels
Pub Date: 2025-11-19 | DOI: 10.3758/s13428-025-02883-0 | Behavior Research Methods, 58(1), 3
Video recordings are commonplace for observing human and animal behaviours, including interindividual interactions. In studies of humans, analyses for clinical applications remain particularly cumbersome, requiring human-based annotation that is time-consuming, bias-prone, and cost-ineffective. Attempts to use machine learning to address these limitations still oftentimes require highly standardised environments, scripted scenarios, and forward-facing individuals. Here, we provide the ADVANCE toolkit, an automated video annotation pipeline. The versatility of ADVANCE is demonstrated with schoolchildren and adults in an unscripted clinical setting within an art classroom environment that included 2-5 individuals, dynamic occlusions, and large variations in actions. We accurately detected each individual, tracked them simultaneously throughout the duration of the recording (including when an individual left and re-entered the field of view), estimated the position of their skeletal joints, and labelled their poses. By resolving challenges of manual annotation, we radically enhance the ability to extract information from video recordings across different scenarios and settings. This toolkit reduces clinical workload and enhances the ethological validity of video-based assessments, offering scalable solutions for behaviour analyses in naturalistic contexts.
{"title":"The ADVANCE toolkit: Automated descriptive video annotation in naturalistic child environments.","authors":"Naomi K Middelmann, Jean-Paul Calbimonte, Emily B Wake, Manon E Jaquerod, Nastia Junod, Jennifer Glaus, Olga Sidiropoulou, Kerstin J Plessen, Micah M Murray, Matthew J Vowels","doi":"10.3758/s13428-025-02883-0","DOIUrl":"10.3758/s13428-025-02883-0","url":null,"abstract":"<p><p>Video recordings are commonplace for observing human and animal behaviours, including interindividual interactions. In studies of humans, analyses for clinical applications remain particularly cumbersome, requiring human-based annotation that is time-consuming, bias-prone, and cost-ineffective. Attempts to use machine learning to address these limitations still oftentimes require highly standardised environments, scripted scenarios, and forward-facing individuals. Here, we provide the ADVANCE toolkit, an automated video annotation pipeline. The versatility of ADVANCE is demonstrated with schoolchildren and adults in an unscripted clinical setting within an art classroom environment that included 2-5 individuals, dynamic occlusions, and large variations in actions. We accurately detected each individual, tracked them simultaneously throughout the duration of the recording (including when an individual left and re-entered the field of view), estimated the position of their skeletal joints, and labelled their poses. By resolving challenges of manual annotation, we radically enhance the ability to extract information from video recordings across different scenarios and settings. This toolkit reduces clinical workload and enhances the ethological validity of video-based assessments, offering scalable solutions for behaviour analyses in naturalistic contexts.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 1","pages":"3"},"PeriodicalIF":3.9,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12630247/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145556145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The best fixation target revisited: New insights from retinal eye tracking.
Diederick C Niehorster, Szymon Tamborski, Marcus Nyström, Robert Konklewski, Valentyna Pryhodiuk, Krzysztof Tołpa, Roy S Hessels, Maciej Szkulmowski, Ignace T C Hooge
Pub Date: 2025-11-19 | DOI: 10.3758/s13428-025-02890-1 | Behavior Research Methods, 58(1), 2
In many tasks, participants are instructed to fixate a target. While maintaining fixation, the eyes nonetheless make small fixational eye movements, such as microsaccades and drift. Previous work has examined the effect of fixation point design on fixation stability and on the amount and spatial extent of fixational eye movements. However, much of this work used video-based eye trackers, which have insufficient resolution and suffer from artefacts that make them unsuitable for this topic of study. Here, we therefore use a retinal eye tracker, which offers superior resolution and does not suffer from the same artefacts, to reexamine which fixation point design minimizes fixational eye movements. Participants were shown five fixation targets in two target-polarity conditions, while the overall spatial spread of their gaze position during fixation, as well as their microsaccades and fixational drift, were examined. We found that gaze was more stable for white-on-black than for black-on-grey fixation targets. Gaze was also more stable (lower spatial spread and smaller microsaccade and drift displacements) for fixation targets with a small central feature, but these targets also yielded higher microsaccade rates than larger fixation targets without such a feature. In conclusion, there is no single best fixation target that minimizes all aspects of fixational eye movements. Instead, if one wishes to minimize the spatial spread of the gaze position or microsaccade and drift displacements, we recommend using a target with a small central feature. If one instead wishes to minimize the microsaccade rate, we recommend using a larger target without a small central feature.
{"title":"The best fixation target revisited: New insights from retinal eye tracking.","authors":"Diederick C Niehorster, Szymon Tamborski, Marcus Nyström, Robert Konklewski, Valentyna Pryhodiuk, Krzysztof Tołpa, Roy S Hessels, Maciej Szkulmowski, Ignace T C Hooge","doi":"10.3758/s13428-025-02890-1","DOIUrl":"10.3758/s13428-025-02890-1","url":null,"abstract":"<p><p>In many tasks, participants are instructed to fixate a target. While maintaining fixation, the eyes nonetheless make small fixational eye movements, such as microsaccades and drift. Previous work has examined the effect of fixation point design on fixation stability and the amount and spatial extent of fixational eye movements. However, much of this work used video-based eye trackers, which have insufficient resolution and suffer from artefacts that make them unsuitable for this topic of study. Here, we therefore use a retinal eye tracker, which offers superior resolution and does not suffer from the same artifacts to reexamine what fixation point design minimizes fixational eye movements. Participants were shown five fixation targets in two target polarity conditions, while the overall spatial spread of their gaze position during fixation, as well as their microsaccades and fixational drift, were examined. We found that gaze was more stable for white-on-black than black-on-grey fixation targets. Gaze was also more stable (lower spatial spread, microsaccade, and drift displacement) for fixation targets with a small central feature but these targets also yielded higher microsaccade rates than larger fixation targets without such a small central feature. In conclusion, there is not a single best fixation target that minimizes all aspects of fixational eye movements. Instead, if one wishes to optimize for minimal spatial spread of the gaze position, microsaccade or drift displacements, we recommend using a target with a small central feature. If one instead wishes to optimize for the lowest microsaccade rate, we recommend using a larger target without a small central feature.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 1","pages":"2"},"PeriodicalIF":3.9,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12630294/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145556214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spower: A general-purpose Monte Carlo simulation power analysis program.
R Philip Chalmers
Pub Date: 2025-11-18 | DOI: 10.3758/s13428-025-02787-z | Behavior Research Methods, 57(12), 348
This article presents Spower, an R package designed as a general-purpose Monte Carlo simulation experiment tool for performing power analyses. The package offers complete customization, with support for five distinct (expected) power analysis criteria (prospective/post hoc, a priori, compromise, sensitivity, and criterion), each of which reports the sampling uncertainty associated with the resulting estimates. Researchers may define their own population-generating and analysis functions for tailored simulation experiments, or may choose from a selection of predefined simulation experiments available within the package. To facilitate comparability and further extensibility, simulation counterparts of the subroutines from the popular stand-alone software G*Power 3.1 (Faul et al., Behavior Research Methods, 41(4), 1149-1160, 2009) are included within the package, along with other useful simulation experiment subroutines for improving estimation precision and creating visualizations.
{"title":"Spower: A general-purpose Monte Carlo simulation power analysis program.","authors":"R Philip Chalmers","doi":"10.3758/s13428-025-02787-z","DOIUrl":"https://doi.org/10.3758/s13428-025-02787-z","url":null,"abstract":"<p><p>This article presents the software Spower, an R package designed as a general-purpose Monte Carlo simulation experiment tool to perform power analyses. The package includes complete customization capabilities with support for five distinct (expected) power analysis criteria (prospective/post hoc, a priori, compromise, sensitivity, and criterion), each of which reports the sampling uncertainty associated with the resulting estimates. Researchers may choose to define their own population generating and analysis function for their tailored simulation experiments, or may choose from a selection of the predefined simulation experiments available within the package. To facilitate comparability and further extensibility, simulation counterparts of the subroutines from the popular stand-alone software G*Power 3.1 (Faul et al., Behavior Research Methods, 41(4), 1149-1160 2009) are included within the package, along with other useful simulation experiment subroutines for improving estimation precision and creating visualizations.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 12","pages":"348"},"PeriodicalIF":3.9,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145547827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correction: SUBTLEX-AR: Arabic word distributional characteristics based on movie subtitles.
Sami Boudelaa, Manuel Carreiras, Nazrin Jariya, Manuel Perea
Pub Date: 2025-11-18 | DOI: 10.3758/s13428-025-02899-6 | Behavior Research Methods, 57(12), 349
{"title":"Correction: SUBTLEX-AR: Arabic word distributional characteristics based on movie subtitles.","authors":"Sami Boudelaa, Manuel Carreiras, Nazrin Jariya, Manuel Perea","doi":"10.3758/s13428-025-02899-6","DOIUrl":"https://doi.org/10.3758/s13428-025-02899-6","url":null,"abstract":"","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 12","pages":"349"},"PeriodicalIF":3.9,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145547756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-cultural adaptation of the Language and Social Background Questionnaire: Psychometric properties emerging from the Persian version.
Mehri Maleki, Fatemeh Jahanjoo, Samin Shibafar, Gelavizh Karimijavan, Mohammad Hassan Torabi, Farnoush Jarollahi
Pub Date: 2025-11-18 | DOI: 10.3758/s13428-025-02831-y | Behavior Research Methods, 57(12), 346
The self-reported Language and Social Background Questionnaire (LSBQ) measures an individual's language proficiency and usage quantitatively. This cross-sectional study aimed to evaluate the psychometric properties of the LSBQ in the Persian (Farsi) language. A total of 325 adults aged between 15 and 59 years (mean age = 21.00 years, SD = 3.56; 251 females, 70 males) from Tabriz and Tehran participated in this study. Exploratory factor analysis (EFA) was employed to evaluate the questionnaire's factor structure. The psychometric properties of the Persian LSBQ were assessed through various validity measures, as well as reliability analysis and receiver operating characteristic (ROC) curve analysis. The overall content validity ratio for the questionnaire was 0.98, with an impact score of 4.47. The internal consistency of the scale was satisfactory, with a Cronbach's alpha of 0.707. The EFA identified five key factors: "dominant language at home and community," "non-Persian use," "non-Persian proficiency," "Persian comprehension," and "switching." Using Youden's J criterion, an optimal cut-off point of -1.00 was determined to effectively distinguish between monolinguals and non-monolinguals. To assess the convergent and discriminant validity of the instrument, Spearman's correlation was used to analyze the relationships among the variables. The Persian version of the LSBQ is a reliable and valid tool for assessing language proficiency and usage among Persian-speaking participants. It effectively distinguishes between monolingual and non-monolingual individuals. Researchers and clinicians can use the LSBQ effectively, provided it aligns with their specific research questions and the language experiences of their target population.
{"title":"Cross-cultural adaptation of the Language and Social Background Questionnaire: Psychometric properties emerging from the Persian version.","authors":"Mehri Maleki, Fatemeh Jahanjoo, Samin Shibafar, Gelavizh Karimijavan, Mohammad Hassan Torabi, Farnoush Jarollahi","doi":"10.3758/s13428-025-02831-y","DOIUrl":"https://doi.org/10.3758/s13428-025-02831-y","url":null,"abstract":"<p><p>The self-reported Language and Social Background Questionnaire (LSBQ) measures an individual's language proficiency and usage quantitatively. This cross-sectional study aims to evaluate the psychometric properties of the LSBQ in the Persian (Farsi) language. A total of 325 adults aged between 15 and 59 years (mean age = 21.00 years, SD = 3.56; 251 females, 70 males) from Tabriz and Tehran participated in this study. To evaluate the Language and Social Background Questionnaire (LSBQ), exploratory factor analysis (EFA) was employed. The psychometric properties of the Persian LSBQ were assessed through various validity measures, as well as reliability analysis and receiver operating characteristic (ROC) curve analysis. The overall content validity ratio for the questionnaire was 0.98, with an impact score of 4.47. The internal consistency of the scale was satisfactory, with a Cronbach's alpha of 0.707. The EFA identified five key factors: \"dominant language at home and community,\" \"non-Persian use,\" \"non-Persian proficiency,\" \"Persian comprehension,\" and \"switching\". Using Youden's J criterion, an optimal cut-off points of - 1.00 was determined to effectively distinguish between monolinguals and non-monolinguals. To assess the convergent and discriminant validity of the instrument, Spearman's correlation was utilized to analyze the relationships among the variables. The Persian version of the LSBQ is a reliable and valid tool for assessing language proficiency and usage among Persian-speaking participants. It effectively distinguishes between monolingual and non-monolingual individuals. Researchers and clinicians can utilize the LSBQ effectively, provided it aligns with their specific research questions and the language experiences of their target population.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 12","pages":"346"},"PeriodicalIF":3.9,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145538883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}