How plausible is my model? Assessing model plausibility of structural equation models using Bayesian posterior probabilities (BPP).
Ivan Jacob Agaloos Pesigan, Shu Fai Cheung, Huiping Wu, Florbela Chang, Shing On Leung
Pub Date: 2026-02-23. DOI: 10.3758/s13428-025-02921-x. Behavior Research Methods, 58(3).
In structural equation modeling (SEM), one method to select the most plausible model from several candidates, or to compare one or more hypothesized models with similar alternatives on plausibility, is to compare the models using Bayesian posterior probability (BPP). BPP can be computed from Bayesian information criterion (BIC) scores (Wu et al., Multivariate Behavioral Research, 55(1), 1-16, 2020). This approach complements conventional goodness-of-fit indices such as the Comparative Fit Index (CFI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR) by giving a concise BPP for assessing uncertainty among all models considered. It can also reveal evidence against a model that is otherwise hidden by these indices. However, Wu et al. (2020) did not provide guidelines on deciding which models should be considered. To facilitate the use of BPP, we propose a novel method for selecting this initial set of models, called neighboring models. The method integrates seamlessly into the typical workflow for SEM analysis: researchers can fit a model as usual and then use the method to assess whether it is the most plausible model compared with its neighboring models. We believe the proposed method will make it easier for researchers to make better-informed decisions when evaluating their models. We developed a user-friendly R package, modelbpp, to automate all the steps (generating the set of neighboring models, fitting them, and computing the BPPs) in a single function.
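A minimal sketch of the intended workflow, assuming the CRAN packages lavaan and modelbpp are installed. model_set() is the package's documented single entry point, but treat the exact arguments and printed output as assumptions that may vary across versions:

```r
# Sketch: fit a lavaan model as usual, then compare it with its
# neighboring models via Bayesian posterior probabilities (BPPs).
library(lavaan)
library(modelbpp)

# Simulated mediation data (illustrative only)
set.seed(1234)
n <- 200
x <- rnorm(n)
m <- 0.4 * x + rnorm(n)
y <- 0.3 * m + rnorm(n)
dat <- data.frame(x, m, y)

fit <- sem("m ~ x
            y ~ m", data = dat, fixed.x = FALSE)

out <- model_set(fit)  # generate neighbors, fit them, compute BPPs
out                    # BPP of the fitted model relative to its neighbors
```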
{"title":"How plausible is my model? Assessing model plausibility of structural equation models using Bayesian posterior probabilities (BPP).","authors":"Ivan Jacob Agaloos Pesigan, Shu Fai Cheung, Huiping Wu, Florbela Chang, Shing On Leung","doi":"10.3758/s13428-025-02921-x","DOIUrl":"10.3758/s13428-025-02921-x","url":null,"abstract":"<p><p>In structural equation modeling (SEM), one method to select the most plausible model from several candidates, or to compare one or more hypothesized models with similar alternatives on plausibility, is to compare the models using Bayesian posterior probability (BPP). BPP can be computed from the Bayesian information criterion (BIC) scores (Wu et al. Multivariate Behavioral Research, 55(1), 1-16, 2020). This approach complements conventional goodness-of-fit indices such as the Comparative Fit Index (CFI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR) in giving concise BPP for assessing uncertainties among all models considered. It can also reveal evidence against a model otherwise hidden by these indices. However, Wu et al. Multivariate Behavioral Research, 55(1), 1-16. (2020) did not provide guidelines on deciding the models that should be considered. To facilitate the use of BPP, we proposed a novel method for selecting this set of models, called neighboring models, to help researchers decide on the initial set. This novel method integrates seamlessly into the typical workflow for SEM analysis. Researchers can fit a model as usual and then use this method to assess whether it is the most plausible model compared with the neighboring models. We believe the proposed method will make it easier for researchers to make better-informed decisions when evaluating their models. We developed a user-friendly R package, modelbpp, to automate all the steps: generating the set of neighboring models, fitting them, and computing the BPPs, all in a single function.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 3","pages":""},"PeriodicalIF":3.9,"publicationDate":"2026-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12929293/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147275477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generalized least squares transformation for single-case experimental design: Introducing the R package lmeSCED.
Chendong Li, Eunkyeng Baek, Wen Luo
Pub Date: 2026-02-23. DOI: 10.3758/s13428-025-02936-4. Behavior Research Methods, 58(3).
Data generated from single-case experimental designs (SCEDs) are repeated observations on one or a few participants, making multilevel models (MLMs) a useful tool. However, SCED data have two inherent features, autocorrelation and small sample sizes, which, if not properly accounted for, result in biased standard errors and inflated type I error rates for fixed effects. Existing commercial statistical programs (for example, SAS) can model first-order autoregressive [AR(1)] residuals and apply small-sample corrections such as Satterthwaite's adjustment, but they are costly and offer no principled test of random effects. Widely used R packages, in contrast, implement either small-sample adjustments or AR(1) structures, but not both. This study aims to (1) evaluate a two-step solution that combines a generalized least squares (GLS) transformation to remove AR(1) residual correlation with Satterthwaite's small-sample adjustment for fixed-effects inference, and (2) implement these methods, along with a boundary-corrected restricted likelihood-ratio test and parametric bootstrapping for random-effects variance components, in a user-friendly R package, lmeSCED. Results from the Monte Carlo simulation study show that applying MLMs to GLS-transformed data recovers true parameter values without bias and keeps type I error rates at nominal levels. We then demonstrate the utility of lmeSCED on an empirical dataset to illustrate its use in practice. Limitations and future directions are also discussed.
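The GLS step can be illustrated independently of the package. For a known (or estimated) AR(1) parameter rho, the classic Prais-Winsten transformation removes the serial correlation; below is a minimal base-R sketch of that idea on a single simulated series. This is not lmeSCED's implementation or interface, whose function names we have not verified:

```r
# Illustrative AR(1) prewhitening (GLS transformation), base R only.
set.seed(42)
n   <- 30
rho <- 0.5
e   <- as.numeric(arima.sim(model = list(ar = rho), n = n))  # AR(1) errors
y   <- 2 + 0.8 * seq_len(n) + e                              # level + trend

# Prais-Winsten transform: first obs rescaled, later obs quasi-differenced
y_star <- c(sqrt(1 - rho^2) * y[1], y[-1] - rho * y[-n])
t_star <- c(sqrt(1 - rho^2) * 1, seq_len(n)[-1] - rho * seq_len(n)[-n])
i_star <- c(sqrt(1 - rho^2), rep(1 - rho, n - 1))  # transformed intercept

# OLS on the transformed data is GLS on the original data
fit <- lm(y_star ~ 0 + i_star + t_star)
summary(fit)  # residuals should now be approximately uncorrelated
```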
{"title":"Generalized least squares transformation for single-case experimental design: Introducing the R package lmeSCED.","authors":"Chendong Li, Eunkyeng Baek, Wen Luo","doi":"10.3758/s13428-025-02936-4","DOIUrl":"https://doi.org/10.3758/s13428-025-02936-4","url":null,"abstract":"<p><p>Data generated from single-case experimental designs (SCEDs) are repeated observations on one or a few participants, making multilevel models (MLMs) a useful tool. However, there are two features inherent to SCED data: autocorrelation and small sample sizes. These features result in biased standard errors and inflated type I error rates for fixed effects. Existing commercial statistical programs (for example, SAS) can model first-order autoregressive [AR(1)] residuals and apply small-sample corrections such as Satterthwaite's adjustment, but they are costly and offer no principled test of random effects. Widely used R packages, in contrast, implement either small-sample adjustments or AR(1) structures, but not both. This study aims to (1) evaluate a two-step solution that combines a generalized least squares (GLS) transformation to remove AR(1) residual correlation with Satterthwaite's small-sample adjustment for fixed-effects inference, and (2) implement these methods along with a boundary-corrected restricted likelihood-ratio test and parametric bootstrapping for random effects variance components in a user-friendly R package, lmeSCED. Results from the Monte Carlo simulation study show that applying MLMs to GLS-transformed data recovers true parameter values without bias and keeps type I error rates at nominal levels. We then demonstrate the utility of lmeSCED on an empirical dataset to illustrate its use in practice. The limitations and future directions are also discussed.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 3","pages":""},"PeriodicalIF":3.9,"publicationDate":"2026-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147275540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ConversationAlign: Open-source software for analyzing patterns of lexical use and alignment in conversation transcripts.
Benjamin Sacks, Virginia Ulichney, Anna Duncan, Chelsea Helion, Sarah M Weinstein, Tania Giovannetti, Gus Cooney, Jamie Reilly
Pub Date: 2026-02-20. DOI: 10.3758/s13428-026-02954-w. Behavior Research Methods, 58(3).
Much of our scientific understanding of language processing has been informed by controlled experiments divorced from the real-world demands of naturalistic communication. Conversation requires synchronization of rate, amplitude, lexical complexity, affective coloring, shared reference, and countless other verbal and nonverbal dimensions. Conversation is not merely a vector for information transfer but also serves as a mechanism for establishing or maintaining social relationships. This process of language calibration between interlocutors is known as linguistic alignment. We developed an open-source R package, ConversationAlign, capable of computing novel indices of linguistic alignment and main effects of language use between interlocutors by evaluating word choice across numerous semantic, affective, and lexical dimensions (e.g., valence, concreteness, frequency, word length). We describe the operations of ConversationAlign, including its primary functions of cleaning and transforming raw language data into simultaneous time series objects aggregated by interlocutor, turn, and conversation. We then outline mathematical operations involved in computing complementary indices of linguistic alignment that capture both local (synchrony in turn-by-turn scores) and global relations (overall proximity) between interlocutors. We present a use case of ConversationAlign applied to interview transcripts from American radio legend Terry Gross and her many guests spanning 15 years. We identify caveats for use and potential sources of bias (e.g., polysemy, missing data, robustness to brief language samples) and close with a discussion of potential applications to other populations. ConversationAlign (v0.4.0) is freely available for download and use via CRAN or GitHub. For technical instructions and download, visit https://github.com/Reilly-ConceptsCognitionLab/ConversationAlign.
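The local (turn-by-turn) alignment idea can be illustrated with a toy computation: score each turn on a lexical norm (e.g., mean concreteness), pair each turn with the interlocutor's preceding turn, and correlate. This is a conceptual sketch of the kind of index described above, not ConversationAlign's internal code; its actual functions (documented at the GitHub page linked above) handle cleaning, norm lookup, and aggregation:

```r
# Toy illustration of turn-level alignment on one lexical dimension.
# Hypothetical data: one conversation with alternating speakers A and B,
# each turn already scored on mean concreteness (a stand-in for the
# norm lookup the package performs internally).
turns <- data.frame(
  turn         = 1:8,
  speaker      = rep(c("A", "B"), 4),
  concreteness = c(3.1, 3.0, 2.6, 2.8, 3.4, 3.3, 2.9, 3.0)
)

# Local alignment: correlate each turn's score with the immediately
# preceding turn (lag-1 pairs always cross speakers here).
prev <- turns$concreteness[-nrow(turns)]
curr <- turns$concreteness[-1]
cor(prev, curr)  # positive values suggest turn-by-turn convergence

# Global proximity: difference in the speakers' overall means
abs(mean(turns$concreteness[turns$speaker == "A"]) -
    mean(turns$concreteness[turns$speaker == "B"]))
```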
{"title":"ConversationAlign: Open-source software for analyzing patterns of lexical use and alignment in conversation transcripts.","authors":"Benjamin Sacks, Virginia Ulichney, Anna Duncan, Chelsea Helion, Sarah M Weinstein, Tania Giovannetti, Gus Cooney, Jamie Reilly","doi":"10.3758/s13428-026-02954-w","DOIUrl":"10.3758/s13428-026-02954-w","url":null,"abstract":"<p><p>Much of our scientific understanding of language processing has been informed by controlled experiments divorced from the real-world demands of naturalistic communication. Conversation requires synchronization of rate, amplitude, lexical complexity, affective coloring, shared reference, and countless other verbal and nonverbal dimensions. Conversation is not merely a vector for information transfer but also serves as a mechanism for establishing or maintaining social relationships. This process of language calibration between interlocutors is known as linguistic alignment. We developed an open-source R package, ConversationAlign, capable of computing novel indices of linguistic alignment and main effects of language use between interlocutors by evaluating word choice across numerous semantic, affective, and lexical dimensions (e.g., valence, concreteness, frequency, word length). We describe the operations of ConversationAlign, including its primary functions of cleaning and transforming raw language data into simultaneous time series objects aggregated by interlocutor, turn, and conversation. We then outline mathematical operations involved in computing complementary indices of linguistic alignment that capture both local (synchrony in turn-by-turn scores) and global relations (overall proximity) between interlocutors. We present a use case of ConversationAlign applied to interview transcripts from American radio legend Terry Gross and her many guests spanning 15 years. We identify caveats for use and potential sources of bias (e.g., polysemy, missing data, robustness to brief language samples) and close with a discussion of potential applications to other populations. ConversationAlign (v 0.4.0) is freely available for download and use via CRAN or GitHub. For technical instructions and download, visit https://github.com/Reilly-ConceptsCognitionLab/ConversationAlign .</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 3","pages":""},"PeriodicalIF":3.9,"publicationDate":"2026-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12923439/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146257273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantifying the stability landscapes of psychological networks.
Jingmeng Cui, Gabriela Lunansky, Anna Lichtwarck-Aschoff, Norman B Mendoza, Fred Hasselman
Pub Date: 2026-02-20. DOI: 10.3758/s13428-025-02917-7. Behavior Research Methods, 58(3).
The network theory of psychopathology proposes that mental disorders can be represented as networks of interacting psychiatric symptoms. These direct symptom-symptom interactions can create a vicious cycle of symptom activation, pushing the network to a self-sustaining, dysfunctional phase of psychopathology: a mental disorder. Symptom network models can be estimated from empirical data through statistical models. Although simulation studies have established a relation between the structure of these symptom network models and the probability that they end up in a self-sustaining dysfunctional phase, the general stability of the system is left implicit. The general stability includes both the stability of the dysfunctional phase and the stability of the healthy phase. In this paper, we present a novel method to quantify the stability of psychological network models through stability landscapes. Our method is based on the Hamiltonian of the microstates of Ising models and can be used to show the stability of estimated Ising network models. Compared to simulation-based methods, our approach is computationally more efficient and quantifies the stability of all possible system states. Furthermore, we propose a set of stability metrics to quantify the stability of the healthy and dysfunctional phases and a bootstrapping method for range estimation of the stability metrics. To demonstrate the method's utility, we apply it to an empirical data set and show how it can be used to compare the stability of phases between groups. The presented method is implemented in a freely available R package, Isinglandr.
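The core quantity is directly computable for small networks: with thresholds tau and symmetric weights W, every microstate s has Hamiltonian H(s) = -sum_i tau_i s_i - sum_{i<j} w_ij s_i s_j, and the landscape follows from the Boltzmann probabilities. A minimal base-R sketch with made-up parameter values, not Isinglandr's interface:

```r
# Hamiltonian-based landscape for a small Ising network (illustrative).
n   <- 4
tau <- rep(-1, n)                        # thresholds (symptoms off by default)
W   <- matrix(0.6, n, n); diag(W) <- 0   # uniform positive symptom coupling

# Enumerate all 2^n microstates with s_i in {0, 1}
states <- as.matrix(expand.grid(rep(list(0:1), n)))

# H(s) = -sum_i tau_i s_i - sum_{i<j} w_ij s_i s_j
H <- -states %*% tau - rowSums((states %*% W) * states) / 2
p <- as.numeric(exp(-H)); p <- p / sum(p)  # Boltzmann distribution (beta = 1)

# Landscape over the number of active symptoms: U(k) = -log P(k)
k <- rowSums(states)
U <- -log(tapply(p, k, sum))
plot(as.numeric(names(U)), U, type = "b",
     xlab = "number of active symptoms", ylab = "-log probability")
```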
{"title":"Quantifying the stability landscapes of psychological networks.","authors":"Jingmeng Cui, Gabriela Lunansky, Anna Lichtwarck-Aschoff, Norman B Mendoza, Fred Hasselman","doi":"10.3758/s13428-025-02917-7","DOIUrl":"10.3758/s13428-025-02917-7","url":null,"abstract":"<p><p>The network theory of psychopathology proposes that mental disorders can be represented as networks of interacting psychiatric symptoms. These direct symptom-symptom interactions can create a vicious cycle of symptom activation, pushing the network to a self-sustaining, dysfunctional phase of psychopathology: a mental disorder. Symptom network models can be estimated from empirical data through statistical models. Although simulation studies have established a relation between the structure of these symptom network models and the probability they end up in a self-sustaining dysfunctional phase, the general stability of the system is left implicit. The general stability includes both the stability of the dysfunctional phase and the stability of the healthy phase. In this paper, we present a novel method to quantify the stability landscapes of network models through stability landscapes. Our method is based on the Hamiltonian of the microstates of Ising models and can be used to show the stability of estimated Ising network models. Compared to simulation-based methods, our approach is computationally more efficient and quantifies the stability of all possible system states. Furthermore, we propose a set of stability metrics to quantify the stability of the healthy and dysfunctional phases and a bootstrapping method for range estimation of the stability metrics. To demonstrate the method's utility, we apply it to an empirical data set and show how it can be used to compare the stability of phases between groups. The presented method is implemented in a freely available R package, Isinglandr.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 3","pages":""},"PeriodicalIF":3.9,"publicationDate":"2026-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12923465/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146257219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LeCoder: A large-scale automated coder for coding errors in word-production tasks.
Shanhua Hu, Delaney DuVal, Brielle C Stark, Nazbanou Nozari
Pub Date: 2026-02-17. DOI: 10.3758/s13428-026-02948-8. Behavior Research Methods, 58(3), 67.
Speech errors have been instrumental in advancing our understanding of the architecture of the language production system, the nature of its representations, and its disorders. To be most informative, researchers usually need large amounts of data. Hand-coding such data can be both cumbersome and subjective. This paper presents LeCoder, the first open-source, automated error coder for English word and naming data, which uses a data-driven approach grounded in large-scale corpora to quantify the target-response relationship, allowing it to be flexible, scalable, and generalizable across new datasets. By testing the coder on two datasets, carefully coded by trained research assistants in two aphasia labs, we first establish that LeCoder has high accuracy when compared to expert coders and, in certain cases, offers a more logical categorization than human coders. We then show, using robust machine-learning approaches, that LeCoder's performance generalizes to new participants and items it has never encountered before. Collectively, these findings encourage the use of LeCoder across labs for more objective coding of speech errors, which will, in turn, increase the replicability of findings in all subfields of research that use speech error analysis, including neuropsychological research.
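LeCoder's actual pipeline is corpus-driven, but the general idea of quantifying the target-response relationship can be caricatured in a few lines: score form overlap with normalized edit distance and combine it with a semantic-relatedness score. This toy sketch is emphatically not LeCoder's algorithm, and the semantic score here is a hard-coded stand-in for the large-corpus measures the paper describes:

```r
# Toy sketch of data-driven error classification (NOT LeCoder's code).
# Form similarity via normalized Levenshtein distance (base R adist()).
form_similarity <- function(target, response) {
  as.numeric(1 - adist(target, response) /
               max(nchar(target), nchar(response)))
}

classify_error <- function(target, response, semantic_score,
                           form_cut = 0.5, sem_cut = 0.4) {
  form <- form_similarity(target, response)
  if (response == target)             "correct"
  else if (form >= form_cut)          "phonological/formal"
  else if (semantic_score >= sem_cut) "semantic"
  else                                "unrelated"
}

classify_error("cat", "hat", semantic_score = 0.1)  # "phonological/formal"
classify_error("cat", "dog", semantic_score = 0.7)  # "semantic"
```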
{"title":"LeCoder: A large-scale automated coder for coding errors in word-production tasks.","authors":"Shanhua Hu, Delaney DuVal, Brielle C Stark, Nazbanou Nozari","doi":"10.3758/s13428-026-02948-8","DOIUrl":"10.3758/s13428-026-02948-8","url":null,"abstract":"<p><p>Speech errors have been instrumental in advancing our understanding of the architecture of the language production system, the nature of its representations, and its disorders. To be most informative, researchers usually need large amounts of data. Hand-coding such data can be both cumbersome and subjective. This paper presents LeCoder, the first open-source, automated error coder for English word and naming data, which uses a data-driven approach grounded in large-scale corpora to quantify the target-response relationship, allowing it to be flexible, scalable, and generalizable across new datasets. By testing the coder on two datasets from two aphasia labs that have been carefully coded by trained research assistants, we first establish that LeCoder has high accuracy when compared to expert coders, and in certain cases, offers a more logical categorization than human coders. We then show, using robust machine-learning approaches, that LeCoder's performance generalizes to new participants and items it has never encountered before. Collectively, these findings encourage the use of LeCoder across labs for more objective coding of speech errors, which will, in turn, increase replicability of findings in all subfields of research that use speech error analysis, including neuropsychological research.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 3","pages":"67"},"PeriodicalIF":3.9,"publicationDate":"2026-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12913347/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146212120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The fundamentals of eye tracking part 6: Working with areas of interest.
Ignace T C Hooge, Marcus Nyström, Diederick C Niehorster, Richard Andersson, Tom Foulsham, Antje Nuthmann, Roy S Hessels
Pub Date: 2026-02-17. DOI: 10.3758/s13428-025-02937-3. Behavior Research Methods, 58(3), 65.
Researchers use area of interest (AOI) analyses to interpret eye-tracking data. This article addresses four key aspects of AOI use: 1) how to report AOIs to support replicable analyses, 2) how to interpret AOI-related statistics, 3) methods for generating both static and dynamic AOIs, and 4) recent developments and future directions in AOI use. The article underscores the importance of aligning AOI design with the study's conceptual and methodological foundations. It argues that critical decisions, such as the size, shape, and placement of AOIs, should be made early in the experimental design process and should take into account eye-tracking data quality, the research question, participant tasks, and the nature of the visual stimulus. It also evaluates recent advances in AOI automation, outlining both their benefits and limitations. The article's main message is that researchers should plan AOIs carefully and explain their choices openly so others can replicate the work.
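As a concrete example of the statistics the article discusses, a static rectangular AOI reduces to a simple hit test on fixation coordinates. A minimal sketch with hypothetical column names and coordinates, not code from the article:

```r
# AOI hit test and dwell time, assuming a fixation data frame with
# columns x, y (pixels) and dur (ms); names and values are hypothetical.
fix <- data.frame(
  x   = c(512, 530, 900, 515),
  y   = c(384, 390, 100, 380),
  dur = c(220, 180, 250, 200)
)

# Rectangular AOI: left/right/top/bottom edges in pixel coordinates
aoi <- list(x0 = 480, x1 = 560, y0 = 350, y1 = 420)

in_aoi <- with(fix, x >= aoi$x0 & x <= aoi$x1 & y >= aoi$y0 & y <= aoi$y1)
sum(fix$dur[in_aoi])  # total dwell time in the AOI (ms)
mean(in_aoi)          # proportion of fixations landing in the AOI
```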
{"title":"The fundamentals of eye tracking part 6: Working with areas of interest.","authors":"Ignace T C Hooge, Marcus Nyström, Diederick C Niehorster, Richard Andersson, Tom Foulsham, Antje Nuthmann, Roy S Hessels","doi":"10.3758/s13428-025-02937-3","DOIUrl":"10.3758/s13428-025-02937-3","url":null,"abstract":"<p><p>Researchers use area of interest (AOI) analyses to interpret eye-tracking data. This article addresses four key aspects of AOI use: 1) how to report AOIs to support replicable analyses, 2) how to interpret AOI-related statistics, 3) methods for generating both static and dynamic AOIs, and 4) recent developments and future directions in AOI use. The article underscores the importance of aligning AOI design with the study's conceptual and methodological foundations. It argues that critical decisions, such as the size, shape, and placement of AOIs, should be made early in the experimental design process and should involve eye-tracking data quality, the research question, participant tasks, and the nature of the visual stimulus. It also evaluates recent advances in AOI automation, outlining both their benefits and limitations. The article's main message is that researchers should plan AOIs carefully and explain their choices openly so others can replicate the work.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 3","pages":"65"},"PeriodicalIF":3.9,"publicationDate":"2026-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12913287/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146212083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validating explicit rating tasks for measuring pronunciation biases: A case study of ING variation.
Aini Li, Meredith Tamminga
Pub Date: 2026-02-17. DOI: 10.3758/s13428-026-02952-y. Behavior Research Methods, 58(3), 66.
Spoken language is highly variable, as words can have different pronunciation variants. A growing body of psycholinguistic research has employed experimental methods such as explicit rating tasks to obtain language users' biases toward different pronunciation variants. However, no prior work has empirically validated whether experimentally elicited estimates accurately reflect real-world usage patterns. By correlating elicited estimates and conversational speech data for English variable ING pronunciations under different experimental prompts, we found that while rating tasks can provide word biases that correlate significantly with corpus word biases, the correlations are only modest, and the relationship between elicited word biases and corpus word biases is asymmetric. These findings call for future research to incorporate word biases into the study of sociolinguistic variation and language processing.
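The validation logic is a per-word correlation between the two measures. A schematic sketch with made-up illustrative numbers, not the study's data or exact analysis:

```r
# Schematic per-word validation (fabricated illustrative values).
# elicited: mean rated bias toward the -in' variant from the rating task
# corpus:   proportion of -in' tokens for the same word in conversation
words <- data.frame(
  word     = c("running", "thinking", "going", "telling", "sitting"),
  elicited = c(0.62, 0.35, 0.71, 0.55, 0.20),
  corpus   = c(0.58, 0.41, 0.66, 0.38, 0.12)
)
cor.test(words$elicited, words$corpus, method = "spearman")
```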
{"title":"Validating explicit rating tasks for measuring pronunciation biases: A case study of ING variation.","authors":"Aini Li, Meredith Tamminga","doi":"10.3758/s13428-026-02952-y","DOIUrl":"10.3758/s13428-026-02952-y","url":null,"abstract":"<p><p>Spoken language is highly variable, as words can have different pronunciation variants. A growing body of psycholinguistic research has employed experimental methods such as explicit rating tasks to obtain user biases toward different pronunciation variants. However, no prior work has empirically validated whether experimentally elicited user estimates accurately reflect real-world usage patterns. By correlating user estimates and conversational speech data for English variable ING pronunciations under different experimental prompts, we found that while rating tasks can provide word biases that do correlate significantly with corpus word biases, the correlations are only modest and there are asymmetries in the relationship between elicited word biases and corpus word biases. These findings call for future research to incorporate word biases into the study of sociolinguistic variation and language processing.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 3","pages":"66"},"PeriodicalIF":3.9,"publicationDate":"2026-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12913361/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146212080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Causal discovery methods in psychological research: Foundations, algorithms, and a practical tutorial in R.
Guangyu Zhu, Li Qian Tay, Mengyan Zhang
Pub Date: 2026-02-13. DOI: 10.3758/s13428-025-02841-w. Behavior Research Methods, 58(2), 64.
Understanding causality and the mechanisms underlying psychological phenomena has been a cornerstone of psychological research with significant implications for theory development and intervention design. While traditional methods such as experimental manipulations or structural equation modelling have been extensively used to explore causal relationships, recent advances in computational techniques have introduced causal discovery methods as a powerful alternative. These methods can uncover complex causal network structures from observational or interventional data, enabling the identification of causal directions in intricate interdependencies involving numerous variables. Building on a growing body of literature, this paper provides a comprehensive survey of core causal discovery algorithms and their recent applications across various disciplines, with a particular focus on their use in uncovering psychological mechanisms. To complement this overview, we provide a tutorial using data from the Health Behavior in School-Aged Children (HBSC) study. This case study demonstrates how causal discovery can be applied to examine gender-specific mechanisms underlying bullying-related outcomes. We also discuss the opportunities and challenges of integrating causal discovery into psychological research.
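To give a flavor of what a constraint-based causal discovery call looks like in R, here is a minimal PC-algorithm example using the pcalg package, a standard choice for this task, though not necessarily the package the tutorial itself uses; the HBSC variables are replaced by simulated data:

```r
# Minimal PC-algorithm example with the pcalg package (CRAN).
library(pcalg)

set.seed(1)
n  <- 500
x1 <- rnorm(n)
x2 <- 0.8 * x1 + rnorm(n)  # true edge: x1 -> x2
x3 <- 0.6 * x2 + rnorm(n)  # true edge: x2 -> x3
d  <- data.frame(x1, x2, x3)

suffStat <- list(C = cor(d), n = n)           # sufficient statistics
fit <- pc(suffStat, indepTest = gaussCItest,  # Gaussian CI test
          alpha = 0.01, labels = colnames(d))
fit  # CPDAG: the equivalence class of DAGs consistent with the data;
     # plot(fit) draws it if Rgraphviz is installed
```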
{"title":"Causal discovery methods in psychological research: Foundations, algorithms, and a practical tutorial in R.","authors":"Guangyu Zhu, Li Qian Tay, Mengyan Zhang","doi":"10.3758/s13428-025-02841-w","DOIUrl":"10.3758/s13428-025-02841-w","url":null,"abstract":"<p><p>Understanding causality and the mechanisms underlying psychological phenomena has been a cornerstone of psychological research with significant implications for theory development and intervention design. While traditional methods such as experimental manipulations or structural equation modelling have been extensively used to explore causal relationships, recent advances in computational techniques have introduced causal discovery methods as a powerful alternative. These methods can uncover complex causal network structures from observational or interventional data, enabling the identification of causal directions in intricate interdependencies involving numerous variables. Building on a growing body of literature, this paper provides a comprehensive survey of core causal discovery algorithms and their recent applications across various disciplines, with a particular focus on their use in uncovering psychological mechanisms. To complement this overview, we provide a tutorial using data from the Health Behavior in School-Aged Children (HBSC) study. This case study demonstrates how causal discovery can be applied to examine gender-specific mechanisms underlying bullying-related outcomes. We also discuss the opportunities and challenges of integrating causal discovery into psychological research.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 2","pages":"64"},"PeriodicalIF":3.9,"publicationDate":"2026-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12904928/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146193948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Decision Process Scale (DPS): Self-report measures of reliance on rules, cost-benefit reasoning, intuition, and deliberation in (moral) decision-making.
Vanessa Cheung, Maximilian Maier, Falk Lieder
Pub Date: 2026-02-13. DOI: 10.3758/s13428-025-02933-7. Behavior Research Methods, 58(2), 62.
Understanding how people make decisions in specific situations is a central challenge in (moral) psychology research. Yet there are no existing self-report scales for measuring the process of decision-making in individual dilemmas (as opposed to general moral attitudes or beliefs about moral decision-making). We address this gap by devising new self-report measures of several of the processes by which people make moral decisions and validate them using realistic moral dilemmas, including six new vignettes that we developed. The resulting 12-item Decision Process Scale (DPS) can be used to measure how much people rely on rules versus cost-benefit reasoning and how much they rely on intuition versus deliberation in the specific moral dilemmas they face in a laboratory experiment or in the real world.
{"title":"The Decision Process Scale (DPS): Self-report measures of reliance on rules, cost-benefit reasoning, intuition, and deliberation in (moral) decision-making.","authors":"Vanessa Cheung, Maximilian Maier, Falk Lieder","doi":"10.3758/s13428-025-02933-7","DOIUrl":"10.3758/s13428-025-02933-7","url":null,"abstract":"<p><p>Understanding how people make decisions in specific situations is a central challenge in (moral) psychology research. Yet there are no existing self-report scales for measuring the process of decision-making in individual dilemmas (as opposed to general moral attitudes or beliefs about moral decision-making). We address this gap by devising new self-report measures of several of the processes by which people make moral decisions and validate them using realistic moral dilemmas, including six new vignettes that we developed. The resulting 12-item Decision Process Scale (DPS) can be used to measure how much people rely on rules versus cost-benefit reasoning and how much they rely on intuition versus deliberation in the specific moral dilemmas they face in a laboratory experiment or in the real world.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 2","pages":"62"},"PeriodicalIF":3.9,"publicationDate":"2026-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12904929/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146193945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distinguishing between the exponential and Lindley distributions: An illustration from biological psychology.
Shovan Chowdhury, Marco Marozzi, Freddy Hernández-Barajas, Fernando Marmolejo-Ramos
Pub Date: 2026-02-13. DOI: 10.3758/s13428-026-02942-0. Behavior Research Methods, 58(2), 63.
The exponential distribution has been used for modeling positively skewed data in biological psychology. However, the lesser-known Lindley distribution, although not typically used for this purpose, has a density and cumulative distribution that are very similar to those of the exponential distribution. This similarity suggests that the Lindley distribution could be a strong candidate for modeling such data types. While the probability density and cumulative distribution functions of these two one-parameter distributions can be quite similar, their hazard rate functions differ markedly. Therefore, selecting the most appropriate distribution significantly impacts the interpretation of the hazard rate function. To aid in this selection, we introduce a method that distinguishes between the exponential and Lindley distributions by examining the ratio of their maximized likelihood functions. This method is versatile, as it can also be applied to type I censored data, enhancing its practical appeal. Asymptotic results are analytically derived. We conducted a simulation study to demonstrate the method's effectiveness, even with small sample sizes. Furthermore, we illustrate the method's application using a published dataset from biological psychology and provide an implementation as an R function.
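For complete (uncensored) data the selection idea is easy to sketch, since both MLEs are closed-form: fit each distribution, then compare the maximized log-likelihoods. This is our own condensed illustration of the likelihood-ratio principle; the paper's treatment additionally covers type I censoring and the asymptotic distribution of the statistic:

```r
# Choose between exponential and Lindley fits via the ratio of
# maximized likelihoods (complete data only; no censoring handled).
select_exp_vs_lindley <- function(x) {
  n    <- length(x)
  xbar <- mean(x)

  # Exponential: f(x) = lambda * exp(-lambda * x), MLE lambda = 1/xbar
  lambda <- 1 / xbar
  ll_exp <- n * log(lambda) - lambda * sum(x)

  # Lindley: f(x) = theta^2/(1+theta) * (1+x) * exp(-theta * x);
  # closed-form MLE (Ghitany et al., 2008)
  theta  <- (-(xbar - 1) + sqrt((xbar - 1)^2 + 8 * xbar)) / (2 * xbar)
  ll_lin <- n * (2 * log(theta) - log(1 + theta)) +
    sum(log(1 + x)) - theta * sum(x)

  lr <- ll_exp - ll_lin  # log of the ratio of maximized likelihoods
  list(statistic = lr,
       preferred = if (lr > 0) "exponential" else "Lindley")
}

set.seed(7)
select_exp_vs_lindley(rexp(100, rate = 1.5))  # tends to favor exponential
```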
{"title":"Distinguishing between the exponential and Lindley distributions: An illustration from biological psychology.","authors":"Shovan Chowdhury, Marco Marozzi, Freddy Hernández-Barajas, Fernando Marmolejo-Ramos","doi":"10.3758/s13428-026-02942-0","DOIUrl":"10.3758/s13428-026-02942-0","url":null,"abstract":"<p><p>The exponential distribution has been used for modeling positively skewed data in biological psychology. However, the lesser-known Lindley distribution, although not typically used for this purpose, has a density and cumulative distribution that are very similar to those of the exponential distribution. This similarity suggests that the Lindley distribution could be a strong candidate for modeling such data types. While the probability density and cumulative distribution functions of these two one-parameter distributions can be quite similar, their hazard rate functions differ markedly. Therefore, selecting the most appropriate distribution significantly impacts the interpretation of the hazard rate function. To aid in this selection, we introduce a method that distinguishes between the exponential and Lindley distributions by examining the ratio of their maximized likelihood functions. This method is versatile, as it can also be applied to type I censored data, enhancing its practical appeal. Asymptotic results are analytically derived. We conducted a simulation study to demonstrate the method's effectiveness, even with small sample sizes. Furthermore, we illustrate the method's application using a published dataset from biological psychology and provide an implementation as an R function.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"58 2","pages":"63"},"PeriodicalIF":3.9,"publicationDate":"2026-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12904947/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146193985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}