Web-based psychoacoustics: Hearing screening, infrastructure, and validation
Pub Date: 2024-03-01 | Epub Date: 2023-06-08 | DOI: 10.3758/s13428-023-02101-9
Brittany A Mok, Vibha Viswanathan, Agudemu Borjigin, Ravinderjit Singh, Homeira Kafi, Hari M Bharadwaj
Anonymous web-based experiments are increasingly used in many domains of behavioral research. However, online studies of auditory perception, especially of psychoacoustic phenomena pertaining to low-level sensory processing, are challenging because of the limited control over acoustics and the inability to perform audiometry to confirm participants' normal-hearing status. Here, we outline our approach to mitigating these challenges and validate our procedures by comparing web-based measurements to lab-based data on a range of classic psychoacoustic tasks. Individual tasks were created using jsPsych, an open-source JavaScript front-end library. Dynamic sequences of psychoacoustic tasks were implemented using Django, an open-source library for web applications, and combined with consent pages, questionnaires, and debriefing pages. Subjects were recruited via Prolific, a subject recruitment platform for web-based studies. Guided by a meta-analysis of lab-based data, we developed and validated a screening procedure to select participants for (putative) normal-hearing status based on their responses in a suprathreshold task and a survey. Headphone use was standardized by supplementing procedures from prior literature with a binaural hearing task. Individuals meeting all criteria were re-invited to complete a range of classic psychoacoustic tasks. For the re-invited participants, absolute thresholds were in excellent agreement with lab-based data for fundamental frequency discrimination, gap detection, and sensitivity to interaural time delay and level difference. Furthermore, word identification scores, consonant confusion patterns, and the co-modulation masking release effect also matched lab-based studies. Our results suggest that web-based psychoacoustics is a viable complement to lab-based research. Source code for our infrastructure is provided.
{"title":"Web-based psychoacoustics: Hearing screening, infrastructure, and validation.","authors":"Brittany A Mok, Vibha Viswanathan, Agudemu Borjigin, Ravinderjit Singh, Homeira Kafi, Hari M Bharadwaj","doi":"10.3758/s13428-023-02101-9","DOIUrl":"10.3758/s13428-023-02101-9","url":null,"abstract":"<p><p>Anonymous web-based experiments are increasingly used in many domains of behavioral research. However, online studies of auditory perception, especially of psychoacoustic phenomena pertaining to low-level sensory processing, are challenging because of limited available control of the acoustics, and the inability to perform audiometry to confirm normal-hearing status of participants. Here, we outline our approach to mitigate these challenges and validate our procedures by comparing web-based measurements to lab-based data on a range of classic psychoacoustic tasks. Individual tasks were created using jsPsych, an open-source JavaScript front-end library. Dynamic sequences of psychoacoustic tasks were implemented using Django, an open-source library for web applications, and combined with consent pages, questionnaires, and debriefing pages. Subjects were recruited via Prolific, a subject recruitment platform for web-based studies. Guided by a meta-analysis of lab-based data, we developed and validated a screening procedure to select participants for (putative) normal-hearing status based on their responses in a suprathreshold task and a survey. Headphone use was standardized by supplementing procedures from prior literature with a binaural hearing task. Individuals meeting all criteria were re-invited to complete a range of classic psychoacoustic tasks. For the re-invited participants, absolute thresholds were in excellent agreement with lab-based data for fundamental frequency discrimination, gap detection, and sensitivity to interaural time delay and level difference. Furthermore, word identification scores, consonant confusion patterns, and co-modulation masking release effect also matched lab-based studies. Our results suggest that web-based psychoacoustics is a viable complement to lab-based research. Source code for our infrastructure is provided.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"1433-1448"},"PeriodicalIF":4.6,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10704001/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9640413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing community detection algorithms in psychometric networks: A Monte Carlo simulation
Pub Date: 2024-03-01 | Epub Date: 2023-06-02 | DOI: 10.3758/s13428-023-02106-4
Alexander P Christensen, Luis Eduardo Garrido, Kiero Guerra-Peña, Hudson Golino
Identifying the correct number of factors in multivariate data is fundamental to psychological measurement. Factor analysis has a long tradition in the field, but it has recently been challenged by exploratory graph analysis (EGA), an approach based on network psychometrics. EGA first estimates a network and then applies the Walktrap community detection algorithm. Simulation studies have demonstrated that EGA recovers the number of factors in simulated data (i.e., the number of communities) with accuracy comparable to or better than that of factor analytic methods. Despite EGA's effectiveness, there has yet to be an investigation into whether other sparsity induction methods or community detection algorithms could achieve equivalent or better performance. Furthermore, unidimensional structures are fundamental to psychological measurement, yet they have been sparsely studied in simulations using community detection algorithms. In the present study, we performed a Monte Carlo simulation pairing the zero-order correlation matrix, GLASSO, and two variants of a non-regularized partial correlation sparsity induction method with several community detection algorithms. We examined the performance of these method-algorithm combinations in both continuous and polytomous data across a variety of conditions. The results indicate that the Fast-greedy, Louvain, and Walktrap algorithms paired with the GLASSO method were consistently among the most accurate and least biased overall.
{"title":"Comparing community detection algorithms in psychometric networks: A Monte Carlo simulation.","authors":"Alexander P Christensen, Luis Eduardo Garrido, Kiero Guerra-Peña, Hudson Golino","doi":"10.3758/s13428-023-02106-4","DOIUrl":"10.3758/s13428-023-02106-4","url":null,"abstract":"<p><p>Identifying the correct number of factors in multivariate data is fundamental to psychological measurement. Factor analysis has a long tradition in the field, but it has been challenged recently by exploratory graph analysis (EGA), an approach based on network psychometrics. EGA first estimates a network and then applies the Walktrap community detection algorithm. Simulation studies have demonstrated that EGA has comparable or better accuracy for recovering the same number of communities as there are factors in the simulated data than factor analytic methods. Despite EGA's effectiveness, there has yet to be an investigation into whether other sparsity induction methods or community detection algorithms could achieve equivalent or better performance. Furthermore, unidimensional structures are fundamental to psychological measurement yet they have been sparsely studied in simulations using community detection algorithms. In the present study, we performed a Monte Carlo simulation using the zero-order correlation matrix, GLASSO, and two variants of a non-regularized partial correlation sparsity induction methods with several community detection algorithms. We examined the performance of these method-algorithm combinations in both continuous and polytomous data across a variety of conditions. The results indicate that the Fast-greedy, Louvain, and Walktrap algorithms paired with the GLASSO method were consistently among the most accurate and least-biased overall.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"1485-1505"},"PeriodicalIF":4.6,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9693521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The comparison data forest: A new comparison data approach to determine the number of factors in exploratory factor analysis
Pub Date: 2024-03-01 | Epub Date: 2023-06-15 | DOI: 10.3758/s13428-023-02122-4
David Goretzko, John Ruscio
Developing psychological assessment instruments often involves exploratory factor analyses, during which one must determine the number of factors to retain. Several factor-retention criteria have emerged that can infer this number from empirical data. Most recently, simulation-based procedures like the comparison data (CD) approach have shown the most accurate estimation of dimensionality. The factor forest, an approach combining extensive data simulation and machine learning modeling, showed even higher accuracy across various common data conditions. Because the factor forest is computationally costly, we combine it with the comparison data approach to present the comparison data forest (CDF). In an evaluation study, we compared this new method with the common comparison data approach and identified optimal parameter settings for both methods across various data conditions. The new comparison data forest achieved slightly higher overall accuracy, though there were some important differences under certain data conditions. The CD approach tended to underfactor and the CDF tended to overfactor, and their results were also complementary: in the 81.7% of instances in which they identified the same number of factors, that number was correct 96.6% of the time.
{"title":"The comparison data forest: A new comparison data approach to determine the number of factors in exploratory factor analysis.","authors":"David Goretzko, John Ruscio","doi":"10.3758/s13428-023-02122-4","DOIUrl":"10.3758/s13428-023-02122-4","url":null,"abstract":"<p><p>Developing psychological assessment instruments often involves exploratory factor analyses, during which one must determine the number of factors to retain. Several factor-retention criteria have emerged that can infer this number from empirical data. Most recently, simulation-based procedures like the comparison data approach have shown the most accurate estimation of dimensionality. The factor forest, an approach combining extensive data simulation and machine learning modeling, showed even higher accuracy across various common data conditions. Because this approach is very computationally costly, we combine the factor forest and the comparison data approach to present the comparison data forest. In an evaluation study, we compared this new method with the common comparison data approach and identified optimal parameter settings for both methods given various data conditions. The new comparison data forest approach achieved slightly higher overall accuracy, though there were some important differences under certain data conditions. The CD approach tended to underfactor and the CDF tended to overfactor, and their results were also complementary in that for the 81.7% of instances when they identified the same number of factors, these results were correct 96.6% of the time.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"1838-1851"},"PeriodicalIF":4.6,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10991039/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9696494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Misinformation Susceptibility Test (MIST): A psychometrically validated measure of news veracity discernment
Pub Date: 2024-03-01 | Epub Date: 2023-06-29 | DOI: 10.3758/s13428-023-02124-2
Rakoen Maertens, Friedrich M Götz, Hudson F Golino, Jon Roozenbeek, Claudia R Schneider, Yara Kyrychenko, John R Kerr, Stefan Stieger, William P McClanahan, Karly Drabot, James He, Sander van der Linden
Interest in the psychology of misinformation has exploded in recent years. Despite ample research, to date there is no validated framework for measuring misinformation susceptibility. We therefore introduce Verification done, a nuanced interpretation schema and assessment tool that simultaneously considers Veracity discernment and its distinct, measurable abilities (real/fake news detection) and biases (distrust/naïveté, i.e., negative/positive judgment bias). We then conduct three studies with seven independent samples (N_total = 8504) to show how to develop, validate, and apply the Misinformation Susceptibility Test (MIST). In Study 1 (N = 409), we use a neural network language model to generate items and apply three psychometric methods (factor analysis, item response theory, and exploratory graph analysis) to create the MIST-20 (20 items; completion time < 2 minutes), the MIST-16 (16 items; < 2 minutes), and the MIST-8 (8 items; < 1 minute). In Study 2 (N = 7674), we confirm the internal and predictive validity of the MIST in five national quota samples (US, UK), across two years, from three different sampling platforms: Respondi, CloudResearch, and Prolific. We also explore the MIST's nomological net and generate age-, region-, and country-specific norm tables. In Study 3 (N = 421), we demonstrate how the MIST, in conjunction with Verification done, can provide novel insights into existing psychological interventions, thereby advancing theory development. Finally, we outline the versatile implementations of the MIST as a screening tool, covariate, and intervention evaluation framework. As all methods are transparently reported and detailed, this work will allow other researchers to create similar scales or adapt them for any population of interest.
{"title":"The Misinformation Susceptibility Test (MIST): A psychometrically validated measure of news veracity discernment.","authors":"Rakoen Maertens, Friedrich M Götz, Hudson F Golino, Jon Roozenbeek, Claudia R Schneider, Yara Kyrychenko, John R Kerr, Stefan Stieger, William P McClanahan, Karly Drabot, James He, Sander van der Linden","doi":"10.3758/s13428-023-02124-2","DOIUrl":"10.3758/s13428-023-02124-2","url":null,"abstract":"<p><p>Interest in the psychology of misinformation has exploded in recent years. Despite ample research, to date there is no validated framework to measure misinformation susceptibility. Therefore, we introduce Verification done, a nuanced interpretation schema and assessment tool that simultaneously considers Veracity discernment, and its distinct, measurable abilities (real/fake news detection), and biases (distrust/naïvité-negative/positive judgment bias). We then conduct three studies with seven independent samples (N<sub>total</sub> = 8504) to show how to develop, validate, and apply the Misinformation Susceptibility Test (MIST). In Study 1 (N = 409) we use a neural network language model to generate items, and use three psychometric methods-factor analysis, item response theory, and exploratory graph analysis-to create the MIST-20 (20 items; completion time < 2 minutes), the MIST-16 (16 items; < 2 minutes), and the MIST-8 (8 items; < 1 minute). In Study 2 (N = 7674) we confirm the internal and predictive validity of the MIST in five national quota samples (US, UK), across 2 years, from three different sampling platforms-Respondi, CloudResearch, and Prolific. We also explore the MIST's nomological net and generate age-, region-, and country-specific norm tables. In Study 3 (N = 421) we demonstrate how the MIST-in conjunction with Verification done-can provide novel insights on existing psychological interventions, thereby advancing theory development. Finally, we outline the versatile implementations of the MIST as a screening tool, covariate, and intervention evaluation framework. As all methods are transparently reported and detailed, this work will allow other researchers to create similar scales or adapt them for any population of interest.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"1863-1899"},"PeriodicalIF":4.6,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10991074/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9696495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DataPipe: Born-open data collection for online experiments
Pub Date: 2024-03-01 | Epub Date: 2023-06-20 | DOI: 10.3758/s13428-023-02161-x
Joshua R de Leeuw
DataPipe (https://pipe.jspsych.org) is a tool that allows researchers to save data from a behavioral experiment directly to the Open Science Framework. Researchers can configure data storage options for an experiment on the DataPipe website and then use the DataPipe API to send data to the Open Science Framework from any Internet-connected experiment. DataPipe is free to use and open-source. This paper describes the design of DataPipe and how it can help researchers adopt the practice of born-open data collection.
{"title":"DataPipe: Born-open data collection for online experiments.","authors":"Joshua R de Leeuw","doi":"10.3758/s13428-023-02161-x","DOIUrl":"10.3758/s13428-023-02161-x","url":null,"abstract":"<p><p>DataPipe ( https://pipe.jspsych.org ) is a tool that allows researchers to save data from a behavioral experiment directly to the Open Science Framework. Researchers can configure data storage options for an experiment on the DataPipe website and then use the DataPipe API to send data to the Open Science Framework from any Internet-connected experiment. DataPipe is free to use and open-source. This paper describes the design of DataPipe and how it can help researchers adopt the practice of born-open data collection.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"2499-2506"},"PeriodicalIF":4.6,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9723042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the psychometric evaluation of cognitive control tasks: An investigation with the Dual Mechanisms of Cognitive Control (DMCC) battery
Pub Date: 2024-03-01 | Epub Date: 2023-04-11 | DOI: 10.3758/s13428-023-02111-7
Jean-Paul Snijder, Rongxiang Tang, Julie M Bugg, Andrew R A Conway, Todd S Braver
The domain of cognitive control has been a major focus of experimental, neuroscience, and individual differences research. Currently, however, no theory of cognitive control successfully unifies both experimental and individual differences findings. Some perspectives deny that a unified psychometric cognitive control construct exists to be measured at all. These shortcomings of the current literature may reflect the fact that current cognitive control paradigms are optimized for the detection of within-subject experimental effects rather than individual differences. In the current study, we examine the psychometric properties of the Dual Mechanisms of Cognitive Control (DMCC) task battery, which was designed in accordance with a theoretical framework that postulates common sources of within-subject and individual differences variation. We evaluated both internal consistency and test-retest reliability, and for the latter, utilized both classical test theory measures (i.e., split-half methods, intraclass correlation) and newer hierarchical Bayesian estimation of generative models. Although traditional psychometric measures suggested poor reliability, the hierarchical Bayesian models indicated a different pattern, with good to excellent test-retest reliability in almost all tasks and conditions examined. Moreover, within-task, between-condition correlations were generally increased when using the Bayesian model-derived estimates, and these higher correlations appeared to be directly linked to the higher reliability of the measures. In contrast, between-task correlations remained low regardless of theoretical manipulations or estimation approach. Together, these findings highlight the advantages of Bayesian estimation methods, while also pointing to the important role of reliability in the search for a unified theory of cognitive control.
{"title":"On the psychometric evaluation of cognitive control tasks: An Investigation with the Dual Mechanisms of Cognitive Control (DMCC) battery.","authors":"Jean-Paul Snijder, Rongxiang Tang, Julie M Bugg, Andrew R A Conway, Todd S Braver","doi":"10.3758/s13428-023-02111-7","DOIUrl":"10.3758/s13428-023-02111-7","url":null,"abstract":"<p><p>The domain of cognitive control has been a major focus of experimental, neuroscience, and individual differences research. Currently, however, no theory of cognitive control successfully unifies both experimental and individual differences findings. Some perspectives deny that there even exists a unified psychometric cognitive control construct to be measured at all. These shortcomings of the current literature may reflect the fact that current cognitive control paradigms are optimized for the detection of within-subject experimental effects rather than individual differences. In the current study, we examine the psychometric properties of the Dual Mechanisms of Cognitive Control (DMCC) task battery, which was designed in accordance with a theoretical framework that postulates common sources of within-subject and individual differences variation. We evaluated both internal consistency and test-retest reliability, and for the latter, utilized both classical test theory measures (i.e., split-half methods, intraclass correlation) and newer hierarchical Bayesian estimation of generative models. Although traditional psychometric measures suggested poor reliability, the hierarchical Bayesian models indicated a different pattern, with good to excellent test-retest reliability in almost all tasks and conditions examined. Moreover, within-task, between-condition correlations were generally increased when using the Bayesian model-derived estimates, and these higher correlations appeared to be directly linked to the higher reliability of the measures. In contrast, between-task correlations remained low regardless of theoretical manipulations or estimation approach. Together, these findings highlight the advantages of Bayesian estimation methods, while also pointing to the important role of reliability in the search for a unified theory of cognitive control.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"1604-1639"},"PeriodicalIF":4.6,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10088767/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9289754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliable affordances: A generative modeling approach for test-retest reliability of the affordances task
Pub Date: 2024-03-01 | Epub Date: 2023-05-01 | DOI: 10.3758/s13428-023-02131-3
Ran Littman, Shachar Hochman, Eyal Kalanthroff
The affordances task serves as an important tool for the assessment of cognition and visuomotor functioning, and yet its test-retest reliability has not been established. In the affordances task, participants attend to a goal-directed task (e.g., classifying manipulable objects such as cups and pots) while suppressing their stimulus-driven, irrelevant reactions afforded by these objects (e.g., grasping their handles). This results in cognitive conflicts manifesting at the task level and the response level. In the current study, we assessed the reliability of the affordances task for the first time. While doing so, we referred to the "reliability paradox," according to which behavioral tasks that produce highly replicable group-level effects often yield low test-retest reliability due to the inadequacy of traditional correlation methods in capturing individual differences between participants. Alongside the simple test-retest correlations, we employed a Bayesian generative model that was recently demonstrated to result in a more precise estimation of test-retest reliability. Two hundred and ninety-five participants completed an online version of the affordances task twice, with a one-week gap. Performance on the online version replicated results obtained under in-lab administrations of the task. While the simple correlation method resulted in weak test-retest measures of the different effects, the generative model yielded a good reliability assessment. The current results support the utility of the affordances task as a reliable behavioral tool for the assessment of group-level and individual differences in cognitive and visuomotor functioning. The results further support the employment of generative modeling in the study of individual differences.
{"title":"Reliable affordances: A generative modeling approach for test-retest reliability of the affordances task.","authors":"Ran Littman, Shachar Hochman, Eyal Kalanthroff","doi":"10.3758/s13428-023-02131-3","DOIUrl":"10.3758/s13428-023-02131-3","url":null,"abstract":"<p><p>The affordances task serves as an important tool for the assessment of cognition and visuomotor functioning, and yet its test-retest reliability has not been established. In the affordances task, participants attend to a goal-directed task (e.g., classifying manipulable objects such as cups and pots) while suppressing their stimulus-driven, irrelevant reactions afforded by these objects (e.g., grasping their handles). This results in cognitive conflicts manifesting at the task level and the response level. In the current study, we assessed the reliability of the affordances task for the first time. While doing so, we referred to the \"reliability paradox,\" according to which behavioral tasks that produce highly replicable group-level effects often yield low test-retest reliability due to the inadequacy of traditional correlation methods in capturing individual differences between participants. Alongside the simple test-retest correlations, we employed a Bayesian generative model that was recently demonstrated to result in a more precise estimation of test-retest reliability. Two hundred and ninety-five participants completed an online version of the affordances task twice, with a one-week gap. Performance on the online version replicated results obtained under in-lab administrations of the task. While the simple correlation method resulted in weak test-retest measures of the different effects, the generative model yielded a good reliability assessment. The current results support the utility of the affordances task as a reliable behavioral tool for the assessment of group-level and individual differences in cognitive and visuomotor functioning. The results further support the employment of generative modeling in the study of individual differences.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"1984-1993"},"PeriodicalIF":4.6,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10150680/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9405456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GAUDIE: Development, validation, and exploration of a naturalistic German AUDItory Emotional database
Pub Date: 2024-03-01 | Epub Date: 2023-05-23 | DOI: 10.3758/s13428-023-02135-z
Katharina Lingelbach, Mathias Vukelić, Jochem W Rieger
Since thoroughly validated naturalistic affective German speech stimulus databases are rare, we present a novel validated database of speech sequences assembled for the purpose of emotion induction. The database comprises 37 audio speech sequences with a total duration of 92 minutes for the induction of positive, neutral, and negative emotion: comedy shows intended to elicit humorous and amusing feelings, weather forecasts, and arguments between couples and relatives from movies or television series. Multiple continuous and discrete ratings are used to validate the database and to capture the time course and variability of valence and arousal. We analyse and quantify how well the audio sequences fulfil the quality criteria of differentiation, salience/strength, and generalizability across participants. Hence, we provide a validated speech database of naturalistic scenarios suitable for investigating emotion processing and its time course with German-speaking participants. Information on using the stimulus database for research purposes can be found at the OSF project repository GAUDIE: https://osf.io/xyr6j/
{"title":"GAUDIE: Development, validation, and exploration of a naturalistic German AUDItory Emotional database.","authors":"Katharina Lingelbach, Mathias Vukelić, Jochem W Rieger","doi":"10.3758/s13428-023-02135-z","DOIUrl":"10.3758/s13428-023-02135-z","url":null,"abstract":"<p><p>Since thoroughly validated naturalistic affective German speech stimulus databases are rare, we present here a novel validated database of speech sequences assembled with the purpose of emotion induction. The database comprises 37 audio speech sequences with a total duration of 92 minutes for the induction of positive, neutral, and negative emotion: comedian shows intending to elicit humorous and amusing feelings, weather forecasts, and arguments between couples and relatives from movies or television series. Multiple continuous and discrete ratings are used to validate the database to capture the time course and variabilities of valence and arousal. We analyse and quantify how well the audio sequences fulfil quality criteria of differentiation, salience/strength, and generalizability across participants. Hence, we provide a validated speech database of naturalistic scenarios suitable to investigate emotion processing and its time course with German-speaking participants. Information on using the stimulus database for research purposes can be found at the OSF project repository GAUDIE: https://osf.io/xyr6j/ .</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"2049-2063"},"PeriodicalIF":4.6,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10991051/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9503771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Replication and extension of the toolbox approach to measuring attention control
Pub Date: 2024-03-01 | Epub Date: 2023-05-30 | DOI: 10.3758/s13428-023-02140-2
Christopher Draheim, Jason S Tshukara, Randall W Engle
There is an increasing consensus among researchers that traditional attention tasks do not validly index the attentional mechanisms that they are often used to assess. We recently tested and validated several existing, modified, and new tasks and found that accuracy-based and adaptive tasks were more reliable and valid measures of attention control than traditional ones, which typically rely on speeded responding and/or contrast comparisons in the form of difference scores (Draheim et al., 2021, Journal of Experimental Psychology: General, 150(2), 242-275). With these improved measures, we found that attention control fully mediated the working memory capacity-fluid intelligence relationship, a novel finding that we argued has significant theoretical implications. The present study was both a follow-up to and an extension of this "toolbox approach" to measuring attention control. Here, we tested updated versions of several attention control tasks in a new dataset (N = 301) and found, with one exception, that these tasks remain strong indicators of attention control. The present study also replicated two important findings: (1) that attention control accounted for nearly all the variance in the relationship between working memory capacity and fluid intelligence, and (2) that the strong association found between attention control and other cognitive measures is not because the attention control tasks place strong demands on processing speed. These findings show that attention control can be measured as a reliable and valid individual differences construct, and that attention control shares substantial variance with other executive functions.
{"title":"Replication and extension of the toolbox approach to measuring attention control.","authors":"Christopher Draheim, Jason S Tshukara, Randall W Engle","doi":"10.3758/s13428-023-02140-2","DOIUrl":"10.3758/s13428-023-02140-2","url":null,"abstract":"<p><p>There is an increasing consensus among researchers that traditional attention tasks do not validly index the attentional mechanisms that they are often used to assess. We recently tested and validated several existing, modified, and new tasks and found that accuracy-based and adaptive tasks were more reliable and valid measures of attention control than traditional ones, which typically rely on speeded responding and/or contrast comparisons in the form of difference scores (Draheim et al. Journal of Experimental Psychology: General, 150(2), 242-275, 2021). With these improved measures, we found that attention control fully mediated the working memory capacity-fluid intelligence relationship, a novel finding that we argued has significant theoretical implications. The present study was both a follow-up and extension to this \"toolbox approach\" to measuring attention control. Here, we tested updated versions of several attention control tasks in a new dataset (N = 301) and found, with one exception, that these tasks remain strong indicators of attention control. The present study also replicated two important findings: (1) that attention control accounted for nearly all the variance in the relationship between working memory capacity and fluid intelligence, and (2) that the strong association found between attention control and other cognitive measures is not because the attention control tasks place strong demands on processing speed. These findings show that attention control can be measured as a reliable and valid individual differences construct, and that attention control shares substantial variance with other executive functions.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"2135-2157"},"PeriodicalIF":4.6,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10228888/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9552856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time detection of mean and variance changes in experience sampling data: A comparison of existing and novel statistical process control approaches
Pub Date: 2024-03-01 | Epub Date: 2023-04-28 | DOI: 10.3758/s13428-023-02103-7
Evelien Schat, Francis Tuerlinckx, Bart De Ketelaere, Eva Ceulemans
Retrospective analyses of experience sampling (ESM) data have shown that changes in mean and variance levels may serve as early warning signs of an imminent depression. Detecting such early warning signs prospectively would pave the way for timely intervention and prevention. The exponentially weighted moving average (EWMA) procedure seems a promising method to scan ESM data for the presence of mean changes in real time. Based on simulation and empirical studies, computing and monitoring day averages using EWMA works particularly well. We therefore expand this idea to the detection of variance changes and propose to use EWMA to prospectively scan for mean changes in day variability statistics (i.e., s², s, ln(s)). When both mean and variance changes are of interest, the multivariate extension of EWMA (MEWMA) can be applied to both the day averages and a day statistic of variability. We evaluate these novel approaches to detecting variance changes by comparing them to EWMA-type procedures that have been specifically developed to detect a combination of mean and variance changes in the raw data: EWMA-S², EWMA-ln(S²), and EWMA-X̄-S². We ran a simulation study to examine the performance of the two approaches in detecting mean changes, variance changes, or both. The results indicate that monitoring day statistics using (M)EWMA works well and outperforms EWMA-S² and EWMA-ln(S²); the performance difference with EWMA-X̄-S² is smaller but notable. Based on the results, we provide recommendations on which statistic of variability to monitor given the type of change (i.e., variance increase or decrease) one expects.
{"title":"Real-time detection of mean and variance changes in experience sampling data: A comparison of existing and novel statistical process control approaches.","authors":"Evelien Schat, Francis Tuerlinckx, Bart De Ketelaere, Eva Ceulemans","doi":"10.3758/s13428-023-02103-7","DOIUrl":"10.3758/s13428-023-02103-7","url":null,"abstract":"<p><p>Retrospective analyses of experience sampling (ESM) data have shown that changes in mean and variance levels may serve as early warning signs of an imminent depression. Detecting such early warning signs prospectively would pave the way for timely intervention and prevention. The exponentially weighted moving average (EWMA) procedure seems a promising method to scan ESM data for the presence of mean changes in real-time. Based on simulation and empirical studies, computing and monitoring day averages using EWMA works particularly well. We therefore expand this idea to the detection of variance changes and propose to use EWMA to prospectively scan for mean changes in day variability statistics (i.e., <math> <msup><mrow><mi>s</mi></mrow> <mn>2</mn></msup> </math> , <math><mi>s</mi></math> , ln( <math><mi>s</mi></math> )). When both mean and variance changes are of interest, the multivariate extension of EWMA (MEWMA) can be applied to both the day averages and a day statistic of variability. We evaluate these novel approaches to detecting variance changes by comparing them to EWMA-type procedures that have been specifically developed to detect a combination of mean and variance changes in the raw data: EWMA- <math> <msup><mrow><mi>S</mi></mrow> <mn>2</mn></msup> </math> , EWMA-ln( <math> <msup><mrow><mi>S</mi></mrow> <mn>2</mn></msup> </math> ), and EWMA- <math><mover><mi>X</mi> <mo>¯</mo></mover> </math> - <math> <msup><mrow><mi>S</mi></mrow> <mn>2</mn></msup> </math> . We ran a simulation study to examine the performance of the two approaches in detecting mean, variance, or both types of changes. The results indicate that monitoring day statistics using (M)EWMA works well and outperforms EWMA- <math> <msup><mrow><mi>S</mi></mrow> <mn>2</mn></msup> </math> and EWMA-ln( <math> <msup><mrow><mi>S</mi></mrow> <mn>2</mn></msup> </math> ); the performance difference with EWMA- <math><mover><mi>X</mi> <mo>¯</mo></mover> </math> - <math> <msup><mrow><mi>S</mi></mrow> <mn>2</mn></msup> </math> is smaller but notable. Based on the results, we provide recommendations on which statistic of variability to monitor based on the type of change (i.e., variance increase or decrease) one expects.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"1459-1475"},"PeriodicalIF":4.6,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9352850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}