
Latest articles from Psychological Methods

Handling missing data in partially clustered randomized controlled trials.
IF 7.8 | CAS Tier 1 (Psychology) | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2025-10-01 | Epub Date: 2023-11-06 | DOI: 10.1037/met0000612
Manshu Yang, Darrell J Gaskin

Partially clustered designs are widely used in psychological research, especially in randomized controlled trials that examine the effectiveness of prevention or intervention strategies. In a partially clustered trial, individuals are clustered into intervention groups in one or more study arms, for the purpose of intervention delivery, whereas individuals in other arms (e.g., the waitlist control arm) are unclustered. Missing data are almost inevitable in partially clustered trials and could pose a major challenge in drawing valid research conclusions. This article focuses on handling auxiliary-variable-dependent missing at random data in partially clustered studies. Five methods were compared via a simulation study, including simultaneous multiple imputation using joint modeling (MI-JM-SIM), arm-specific multiple imputation using joint modeling (MI-JM-AS), arm-specific multiple imputation using substantive-model-compatible sequential modeling (MI-SMC-AS), sequential fully Bayesian estimation using noninformative priors (SFB-NON), and sequential fully Bayesian estimation using weakly informative priors (SFB-WEAK). The results suggest that the MI-JM-AS method outperformed other methods when the variables with missing values only involved fixed effects, whereas the MI-SMC-AS method was preferred if the incomplete variables featured random effects. Applications of different methods are also illustrated using an empirical data example. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
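The arm-specific logic can be sketched with a toy simulation. The code below is a deliberately crude stand-in (single stochastic draws from a fitted normal, not the article's MI-JM-AS or MI-SMC-AS procedures, and all parameter values are made up); it only illustrates how pooling the arms in the imputation model drags the clustered arm's mean toward the control arm:

```python
import math
import random
import statistics

random.seed(7)

# Toy partially clustered trial: the treatment arm is clustered
# (20 intervention groups of 10), the control arm is unclustered.
# All numbers here are illustrative, not from the article.
cluster_fx = [random.gauss(0, 1) for _ in range(20)]
treat = [5 + cluster_fx[i // 10] + random.gauss(0, 1) for i in range(200)]
ctrl = [3 + random.gauss(0, math.sqrt(2)) for _ in range(200)]

def make_missing(y, rate=0.3):
    """Delete outcomes completely at random."""
    return [v if random.random() > rate else None for v in y]

treat_obs, ctrl_obs = make_missing(treat), make_missing(ctrl)
obs_t = [v for v in treat_obs if v is not None]
obs_c = [v for v in ctrl_obs if v is not None]

def impute(y, donor):
    """Fill missing entries with draws from a normal fitted to `donor`."""
    m, s = statistics.mean(donor), statistics.stdev(donor)
    return [v if v is not None else random.gauss(m, s) for v in y]

# Arm-specific imputation: each arm is filled from its own distribution.
arm_t = statistics.mean(impute(treat_obs, obs_t))
# Naive pooled imputation: fills come from both arms combined, dragging
# the treatment-arm mean toward the control arm.
pooled_t = statistics.mean(impute(treat_obs, obs_t + obs_c))
print(round(arm_t, 2), round(pooled_t, 2))
```

In practice one would generate many imputed data sets with models that respect the cluster structure, which is exactly what the methods compared in the article do.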

Citations: 0
Questionable research practices and cumulative science: The consequences of selective reporting on effect size bias and heterogeneity.
IF 7.8 | CAS Tier 1 (Psychology) | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2025-10-01 | Epub Date: 2023-03-23 | DOI: 10.1037/met0000572
Samantha F Anderson, Xinran Liu

Despite increased attention to open science and transparency, questionable research practices (QRPs) remain common, and studies published using QRPs will remain a part of the published record for some time. A particularly common type of QRP involves multiple testing, and in some forms of this, researchers report only a selection of the tests conducted. Methodological investigations of multiple testing and QRPs have often focused on implications for a single study, as well as how these practices can increase the likelihood of false positive results. However, it is illuminating to consider the role of these QRPs from a broader, literature-wide perspective, focusing on consequences that affect the interpretability of results across the literature. In this article, we use a Monte Carlo simulation study to explore the consequences of two QRPs involving multiple testing, cherry picking and question trolling, on effect size bias and heterogeneity among effect sizes. Importantly, we explicitly consider the role of real-world conditions, including sample size, effect size, and publication bias, that amend the influence of these QRPs. Results demonstrated that QRPs can substantially affect both bias and heterogeneity, although there were many nuances, particularly relating to the influence of publication bias, among other factors. The present study adds a new perspective to how QRPs may influence researchers' ability to evaluate a literature accurately and cumulatively, and points toward yet another reason to continue to advocate for initiatives that reduce QRPs. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
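A minimal Monte Carlo sketch of the cherry-picking mechanism (not the authors' simulation design; the sample size, number of tests, and true effect below are arbitrary) shows how reporting only the largest of several tests inflates the average reported effect size:

```python
import math
import random
import statistics

random.seed(42)

TRUE_D, N, K, REPS = 0.2, 50, 5, 500  # all values illustrative

def observed_d():
    """Observed standardized mean difference for one two-group study."""
    g1 = [random.gauss(TRUE_D, 1) for _ in range(N)]
    g0 = [random.gauss(0, 1) for _ in range(N)]
    sp = math.sqrt((statistics.variance(g1) + statistics.variance(g0)) / 2)
    return (statistics.mean(g1) - statistics.mean(g0)) / sp

# Honest reporting: every study reports its single observed effect.
honest = statistics.mean(observed_d() for _ in range(REPS))
# Cherry picking: run K parallel tests but report only the largest effect.
cherry = statistics.mean(max(observed_d() for _ in range(K))
                         for _ in range(REPS))
print(round(honest, 2), round(cherry, 2))
```

The honest average recovers the true effect of 0.2, while the cherry-picked average is biased upward by roughly the expected maximum of K sampling errors.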

Citations: 0
Individual-level probabilities and cluster-level proportions: Toward interpretable level 2 estimates in unconflated multilevel models for binary outcomes.
IF 7.8 | CAS Tier 1 (Psychology) | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2025-10-01 | Epub Date: 2024-02-08 | DOI: 10.1037/met0000646
Timothy Hayes

Multilevel models allow researchers to test hypotheses at multiple levels of analysis; for example, assessing the effects of both individual-level and school-level predictors on a target outcome. To assess these effects with the greatest clarity, researchers are well-advised to cluster mean center all Level 1 predictors and explicitly incorporate the cluster means into the model at Level 2. When an outcome of interest is continuous, this unconflated model specification serves both to increase model accuracy, by separating the level-specific effects of each predictor, and to increase model interpretability, by reframing the random intercepts as unadjusted cluster means. When an outcome of interest is binary or ordinal, however, only the first of these benefits is fully realized: In these models, the intuitive cluster mean interpretations of Level 2 effects are only available on the metric of the linear predictor (e.g., the logit) or, equivalently, the latent response propensity, y*ij. Because the calculations for obtaining predicted probabilities, odds, and ORs operate on the entire combined model equation, the interpretations of these quantities are inextricably tied to individual-level, rather than cluster-level, outcomes. This is unfortunate, given that the probability and odds metrics are often of greatest interest to researchers in practice. To address this issue, I propose a novel rescaling method designed to calculate cluster average success proportions, odds, and ORs in two-level binary and ordinal logistic and probit models. I apply the approach to a real data example and provide supplemental R functions to help users implement the method easily. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
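The gap between a probability computed at the linear predictor of a typical cluster and a probability averaged over clusters can be seen in a small numeric sketch (illustrative parameter values; this shows the familiar conditional-versus-marginal distinction that motivates the rescaling, not the article's method itself):

```python
import math
import random

random.seed(0)

def logistic(x):
    return 1 / (1 + math.exp(-x))

b0, tau = 1.0, 1.5  # fixed intercept (logit scale) and random-intercept SD

# "Unit-specific" probability: plug the linear predictor of a typical
# cluster (random intercept u = 0) straight into the inverse link.
p_conditional = logistic(b0)

# Population-average probability: average the success probability over
# the random-intercept distribution (here by simple Monte Carlo).
draws = [logistic(b0 + random.gauss(0, tau)) for _ in range(200_000)]
p_marginal = sum(draws) / len(draws)

print(round(p_conditional, 3), round(p_marginal, 3))
```

Because the inverse link is nonlinear, the two quantities differ whenever the random-intercept variance is nonzero, which is why naive back-transformation of Level 2 effects does not yield cluster-level proportions.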

Citations: 0
Data aggregation can lead to biased inferences in Bayesian linear mixed models and Bayesian analysis of variance.
IF 7.8 | CAS Tier 1 (Psychology) | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2025-10-01 | Epub Date: 2024-01-25 | DOI: 10.1037/met0000621
Daniel J Schad, Bruno Nicenboim, Shravan Vasishth

Bayesian linear mixed-effects models (LMMs) and Bayesian analysis of variance (ANOVA) are increasingly being used in the cognitive sciences to perform null hypothesis tests, where a null hypothesis that an effect is zero is compared with an alternative hypothesis that the effect exists and is different from zero. While software tools for Bayes factor null hypothesis tests are easily accessible, how to specify the data and the model correctly is often not clear. In Bayesian approaches, many authors use data aggregation at the by-subject level and estimate Bayes factors on aggregated data. Here, we use simulation-based calibration for model inference applied to several example experimental designs to demonstrate that, as with frequentist analysis, such null hypothesis tests on aggregated data can be problematic in Bayesian analysis. Specifically, when random slope variances differ (i.e., violated sphericity assumption), Bayes factors are too conservative for contrasts where the variance is small and they are too liberal for contrasts where the variance is large. Running Bayesian ANOVA on aggregated data can, if the sphericity assumption is violated, likewise lead to biased Bayes factor results. Moreover, Bayes factors for by-subject aggregated data are biased (too liberal) when random item slope variance is present but ignored in the analysis. These problems can be circumvented or reduced by running Bayesian LMMs on nonaggregated data such as on individual trials, and by explicitly modeling the full random effects structure. Reproducible code is available from https://osf.io/mjf47/. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
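Although the article's demonstration is Bayesian, the core aggregation problem has a simple frequentist analog that can be sketched in a few lines (all parameter values illustrative): when a random item slope exists but items are averaged away, by-subject aggregated tests treat the shared item variability as if it were independent subject noise and become anticonservative under the null:

```python
import math
import random
import statistics

random.seed(3)

N_SUBJ, N_ITEM, REPS = 20, 16, 400  # illustrative values
T_CRIT = 2.093  # two-sided 5% critical value, df = N_SUBJ - 1

def aggregated_t():
    """Paired t statistic on by-subject means under a NULL condition
    effect, with a random item slope (SD 0.5) that aggregation hides."""
    item_slope = [random.gauss(0, 0.5) for _ in range(N_ITEM)]
    diffs = []
    for _ in range(N_SUBJ):
        # Each subject's condition difference per item: the shared item
        # slope plus independent trial noise in each condition.
        per_item = [s + random.gauss(0, 1) - random.gauss(0, 1)
                    for s in item_slope]
        diffs.append(statistics.mean(per_item))  # aggregate over items
    se = statistics.stdev(diffs) / math.sqrt(N_SUBJ)
    return statistics.mean(diffs) / se

false_pos = sum(abs(aggregated_t()) > T_CRIT
                for _ in range(REPS)) / REPS
print(false_pos)  # well above the nominal .05
```

The same mechanism drives the too-liberal Bayes factors the article reports; modeling trials and the full random effects structure removes it.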

Citations: 0
A simple Monte Carlo method for estimating power in multilevel designs.
IF 7.8 | CAS Tier 1 (Psychology) | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2025-10-01 | Epub Date: 2023-11-13 | DOI: 10.1037/met0000614
Craig K Enders, Brian T Keller, Michael P Woller

Estimating power for multilevel models is complex because there are many moving parts, several sources of variation to consider, and unique sample sizes at Level 1 and Level 2. Monte Carlo computer simulation is a flexible tool that has received considerable attention in the literature. However, much of the work to date has focused on very simple models with one predictor at each level and one cross-level interaction effect, and approaches that do not share this limitation require users to specify a large set of population parameters. The goal of this tutorial is to describe a flexible Monte Carlo approach that accommodates a broad class of multilevel regression models with continuous outcomes. Our tutorial makes three important contributions. First, it allows any number of within-cluster effects, between-cluster effects, covariate effects at either level, cross-level interactions, and random coefficients. Moreover, we do not assume orthogonal effects, and predictors can correlate at either level. Second, our approach accommodates models with multiple interaction effects, and it does so with exact expressions for the variances and covariances of product random variables. Finally, our strategy for deriving hypothetical population parameters does not require pilot or comparable data. Instead, we use intuitive variance-explained effect size expressions to reverse-engineer solutions for the regression coefficients and variance components. We describe a new R package mlmpower that computes these solutions and automates the process of generating artificial data sets and summarizing the simulation results. The online supplemental materials provide detailed vignettes that annotate the R scripts and resulting output. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
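The bare Monte Carlo logic (simulate a data set, analyze it, repeat, and count rejections) can be sketched for a two-arm cluster-randomized design. The sketch below is far simpler than what the mlmpower package supports: it analyzes cluster means with a t test rather than fitting a full multilevel model, and every parameter value is illustrative:

```python
import math
import random
import statistics

random.seed(11)

# Illustrative design values, not recommendations.
N_CLUST, N_PER, ICC, DELTA, REPS = 15, 20, 0.10, 0.5, 500
T_CRIT = 2.048  # two-sided 5% critical value, df = 2 * N_CLUST - 2

def one_trial_significant():
    """Simulate a two-arm cluster-randomized trial (total variance 1)
    and test the arm difference on cluster means."""
    tau, sigma = math.sqrt(ICC), math.sqrt(1 - ICC)

    def cluster_mean(shift):
        u = random.gauss(0, tau)  # cluster-level random effect
        return statistics.mean(shift + u + random.gauss(0, sigma)
                               for _ in range(N_PER))

    arm1 = [cluster_mean(DELTA) for _ in range(N_CLUST)]
    arm0 = [cluster_mean(0.0) for _ in range(N_CLUST)]
    sp = math.sqrt((statistics.variance(arm1)
                    + statistics.variance(arm0)) / 2)
    t = (statistics.mean(arm1) - statistics.mean(arm0)) \
        / (sp * math.sqrt(2 / N_CLUST))
    return abs(t) > T_CRIT

power = sum(one_trial_significant() for _ in range(REPS)) / REPS
print(power)
```

Varying N_CLUST, N_PER, ICC, or DELTA and rerunning traces out the power surface; the tutorial's approach generalizes this to arbitrary multilevel regression models with interactions and random coefficients.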

Citations: 0
Empirical selection of referent variables: Comparing multiple-indicator multiple-cause-interaction modeling and moderated nonlinear factor analysis.
IF 7.8 | CAS Tier 1 (Psychology) | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2025-10-01 | Epub Date: 2023-11-13 | DOI: 10.1037/met0000613
Cheng-Hsien Li

The fulfillment of measurement invariance/equivalence is considered a prerequisite for meaningfully proceeding with substantive cross-group comparisons. In the multiple-group confirmatory factor analysis approach, one model identification issue has unfortunately received little attention: the specification of a referent variable in the test of measurement invariance. A multiple-indicator multiple-cause (MIMIC) model with moderated effects (i.e., MIMIC-interaction modeling; Woods & Grimm, 2011) and a moderated nonlinear factor analysis (MNLFA; Bauer, 2017) model for detecting uniform and nonuniform measurement inequivalences in tandem were proposed to identify credible referent variables. The performance of two search strategies, constrained and free baseline models, and MIMIC-interaction and MNLFA methodologies were evaluated in a Monte Carlo simulation. Effects of different configurations of the number of inequivalent variables, type and magnitude of inequivalence, magnitude of group differences in factor means and variances, and sample size in combination with each search strategy were determined. Results showed that the constrained baseline model strategy generally outperformed the free baseline model strategy for identifying credible referent variables, functioning well when up to one-third of the observed variables were noninvariant. Moreover, MNLFA performed better than MIMIC-interaction modeling for the selection of referent variables across nearly all conditions investigated in the study. The superiority of MNLFA over MIMIC-interaction modeling was specifically evident in the models with relatively small samples, large between-group latent variance differences, or a combination of both. An empirical example was presented to demonstrate the applicability of MNLFA with the constrained baseline model strategy for referent variable selection. (PsycInfo Database Record (c) 2025 APA, all rights reserved).

Citations: 0
How to synthesize randomized controlled trial data with meta-analytic structural equation modeling: A comparison of various d-to-rpb conversions.
IF 7.8 | CAS Tier 1 (Psychology) | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2025-09-29 | DOI: 10.1037/met0000790
Hannelies de Jonge, Kees-Jan Kan, Frans J Oort, Suzanne Jak

Meta-analytic structural equation modeling (MASEM) allows a researcher to simultaneously examine multiple relations among variables by fitting a structural equation model to summary statistics from multiple studies. Consider, for example, a mediation model with a predictor (X), mediator (M), and outcome variable (Y). In such a model, X can be a dichotomous variable, allowing researchers to examine the direct and indirect effects of an intervention as in randomized controlled trials (RCTs). However, the natural choice of a meta-analysis of RCTs would involve standardized mean differences as effect sizes, whereas MASEM requires correlation matrices as input. This can be solved by converting standardized mean differences (Cohen's d or Hedges' g) to point-biserial correlations (rpb). Possible conversion formulas vary across publications and conversion tools, and it is unclear which one is most appropriate for use in MASEM. The aim of this article is to describe and evaluate several conversions of standardized mean differences to point-biserial correlations in the context of RCTs. We investigate the impact of the usage of various conversions on MASEM parameter estimation using the R package metaSEM in a simulation study, varying the ratio of group sample sizes, number of primary studies, sample sizes, and missingness. The results show that a relatively unknown d-to-rpb conversion generally performs best. However, this conversion formula is not implemented in the mainstream conversion tools. We developed a user-friendly web application entitled Effect Size Calculator and Converter (https://hdejonge.shinyapps.io/ESCACO) that converts the user's primary study statistics into an effect size suitable for use in MASEM. (PsycInfo Database Record (c) 2026 APA, all rights reserved).
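As one concrete example of such a conversion (the widely cited formula given by, e.g., Borenstein et al., 2009; the article compares several variants, and this is not necessarily the one it recommends):

```python
import math

def d_to_rpb(d, n1, n2):
    """Convert a standardized mean difference d to a point-biserial
    correlation using r = d / sqrt(d^2 + a), a = (n1 + n2)^2 / (n1 * n2).
    This is one common conversion, not the only one in the literature."""
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d ** 2 + a)

# Equal groups give a = 4, so d = 0.5 maps to 0.5 / sqrt(4.25) ≈ 0.243.
print(round(d_to_rpb(0.5, 50, 50), 3))  # 0.243
```

Note that the correction factor a depends on the group-size ratio, which is exactly why the simulation varies the ratio of group sample sizes: the more unbalanced the groups, the smaller the resulting correlation for the same d.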

Supplemental Material for Inferences and Effect Sizes for Direct, Indirect, and Total Effects in Continuous-Time Mediation Models
IF 7 Tier 1 Psychology Q1 PSYCHOLOGY, MULTIDISCIPLINARY Pub Date : 2025-09-29 DOI: 10.1037/met0000779.supp
Supplemental Material for How to Synthesize Randomized Controlled Trial Data With Meta-Analytic Structural Equation Modeling: A Comparison of Various d-to-rpb Conversions
IF 7 Tier 1 Psychology Q1 PSYCHOLOGY, MULTIDISCIPLINARY Pub Date : 2025-09-29 DOI: 10.1037/met0000790.supp
Crowdsourcing multiverse analyses to explore the impact of different data-processing and analysis decisions: A tutorial.
IF 7 Tier 1 Psychology Q1 PSYCHOLOGY, MULTIDISCIPLINARY Pub Date : 2025-09-18 DOI: 10.1037/met0000770
Tom Heyman,Ekaterina Pronizius,Savannah C Lewis,Oguz A Acar,Matúš Adamkovič,Ettore Ambrosini,Jan Antfolk,Krystian Barzykowski,Ernest Baskin,Carlota Batres,Leanne Boucher,Jordane Boudesseul,Eduard Brandstätter,W Matthew Collins,Dušica Filipović Ðurđević,Ciara Egan,Vanessa Era,Paulo Ferreira,Chiara Fini,Patricia Garrido-Vásquez,Hendrik Godbersen,Pablo Gomez,Aurelien Graton,Necdet Gurkan,Zhiran He,Dave C Johnson,Pavol Kačmár,Chris Koch,Marta Kowal,Tomas Kratochvil,Marco Marelli,Fernando Marmolejo-Ramos,Martín Martínez,Alan Mattiassi,Nicholas P Maxwell,Maria Montefinese,Coby Morvinski,Maital Neta,Yngwie A Nielsen,Sebastian Ocklenburg,Jaš Onič,Marietta Papadatou-Pastou,Adam J Parker,Mariola Paruzel-Czachura,Yuri G Pavlov,Manuel Perea,Gerit Pfuhl,Tanja C Roembke,Jan P Röer,Timo B Roettger,Susana Ruiz-Fernandez,Kathleen Schmidt,Cynthia S Q Siew,Christian K Tamnes,Jack E Taylor,Rémi Thériault,José L Ulloa,Miguel A Vadillo,Michael E W Varnum,Martin R Vasilev,Steven Verheyen,Giada Viviani,Sebastian Wallot,Yuki Yamada,Yueyuan Zheng,Erin M Buchanan
When processing and analyzing empirical data, researchers regularly face choices that may appear arbitrary (e.g., how to define and handle outliers). If one chooses to exclusively focus on a particular option and conduct a single analysis, its outcome might be of limited utility. That is, one remains agnostic regarding the generalizability of the results, because plausible alternative paths remain unexplored. A multiverse analysis offers a solution to this issue by exploring the various choices pertaining to data-processing and/or model building, and examining their impact on the conclusion of a study. However, even though multiverse analyses are arguably less susceptible to biases compared to the typical single-pathway approach, it is still possible to selectively add or omit pathways. To address this issue, we outline a novel, more principled approach to conducting multiverse analyses through crowdsourcing. The approach is detailed in a step-by-step tutorial to facilitate its implementation. We also provide a worked-out illustration featuring the Semantic Priming Across Many Languages project, thereby demonstrating its feasibility and its ability to increase objectivity and transparency. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
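The core idea of a multiverse analysis described above — enumerating every combination of defensible data-processing choices and running the analysis once per combination — can be sketched as follows. The choice names and the placeholder estimation step are hypothetical; in practice each "universe" would run the study's actual model:

```python
from itertools import product

# Hypothetical data-processing decisions; each combination of options
# defines one "universe" (one analysis pathway) in the multiverse.
choices = {
    "outlier_rule": ["none", "2.5_sd", "3_sd"],
    "transform": ["raw", "log"],
    "covariate_set": ["minimal", "full"],
}

def run_analysis(spec):
    # Placeholder for the actual estimation step (e.g., fitting a model
    # to data processed according to `spec`); here we just echo the spec.
    return spec

# Cartesian product of all options -> the full set of pathways.
universes = [dict(zip(choices, combo)) for combo in product(*choices.values())]
results = [run_analysis(u) for u in universes]
print(len(universes))  # → 12 (3 * 2 * 2 pathways)
```

The article's contribution is the crowdsourcing layer on top of this enumeration: having many researchers propose the choice sets, so that the set of pathways is not curated by a single team.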