
Latest publications in Psychological Methods

Reliability in unidimensional ordinal data: A comparison of continuous and ordinal estimators.
IF 7.6 · CAS Region 1 (Psychology) · Q1 PSYCHOLOGY, MULTIDISCIPLINARY · Pub Date: 2025-02-10 · DOI: 10.1037/met0000739
Eunseong Cho, Sébastien Béland

This study challenges three common methodological beliefs and practices. The first question examines whether ordinal reliability estimators are more accurate than continuous estimators for unidimensional data with uncorrelated errors. Continuous estimators (e.g., coefficient alpha) can be applied to both continuous and ordinal data, while ordinal estimators (e.g., ordinal alpha and categorical omega) are specific to ordinal data. Although ordinal estimators are often argued to have conceptual advantages, comprehensive investigations into their accuracy are limited. The second question explores the relationship between skewness and kurtosis in ordinal data. Previous simulation studies have primarily examined cases where skewness and kurtosis change in the same direction, leaving gaps in understanding their independent effects. The third question addresses item response theory (IRT) models: Should the scaling constant always be fixed at the same value (e.g., 1.7)? To answer these questions, this study conducted a Monte Carlo simulation comparing four continuous estimators and eight ordinal estimators. The results indicated that most estimators achieved acceptable levels of accuracy. On average, ordinal estimators were slightly less accurate than continuous estimators, though the difference was smaller than what most users would consider practically significant (e.g., less than 0.01). However, ordinal alpha stood out as a notable exception, severely overestimating reliability across various conditions. Regarding the scaling constant in IRT models, the results indicated that its optimal value varied depending on the data type (e.g., dichotomous vs. polytomous). In some cases, values below 1.7 were optimal, while in others, values above 1.8 were optimal. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
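The continuous estimator named above, coefficient alpha, can be sketched in a few lines. The following is a generic illustration on simulated 5-point Likert items generated from one latent trait; the data-generating values are hypothetical and do not reproduce the study's simulation design:

```python
import numpy as np

def coefficient_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-item, 5-point Likert data generated from one latent trait
rng = np.random.default_rng(0)
trait = rng.normal(size=500)
raw = trait[:, None] + rng.normal(scale=1.0, size=(500, 4))
items = np.clip(np.round(raw), -2, 2) + 3   # ordinal categories 1..5
alpha = coefficient_alpha(items)
```

Note that alpha here treats the ordinal categories as continuous scores, which is exactly the practice whose accuracy the study evaluates.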

Citations: 0
The relationship between the phi coefficient and the unidimensionality index H: Improving psychological scaling from the ground up.
IF 7.6 · CAS Region 1 (Psychology) · Q1 PSYCHOLOGY, MULTIDISCIPLINARY · Pub Date: 2025-02-10 · DOI: 10.1037/met0000736
Johannes Titz

To study the dimensional structure of psychological phenomena, a precise definition of unidimensionality is essential. Most definitions of unidimensionality rely on factor analysis. However, the reliability of factor analysis depends on the input data, which primarily consists of Pearson correlations. A significant issue with Pearson correlations is that they are almost guaranteed to underestimate unidimensionality, rendering them unsuitable for evaluating the unidimensionality of a scale. This article formally demonstrates that the simple unidimensionality index H is always at least as high as, or higher than, the Pearson correlation for dichotomous and polytomous items (φ). Leveraging this inequality, a case is presented where five dichotomous items are perfectly unidimensional, yet factor analysis based on φ incorrectly suggests a two-dimensional solution. To illustrate that this issue extends beyond theoretical scenarios, an analysis of real data from a statistics exam (N = 133) is conducted, revealing the same problem. An in-depth analysis of the exam data shows that violations of unidimensionality are systematic and should not be dismissed as mere noise. Inconsistent answering patterns can indicate whether a participant blundered, cheated, or has conceptual misunderstandings, information typically overlooked by traditional scaling procedures based on correlations. The conclusion is that psychologists should consider unidimensionality not as a peripheral concern but as the foundation for any serious scaling attempt. The index H could play a crucial role in establishing this foundation. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
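The inequality at the heart of the article can be checked numerically. The sketch below is a hypothetical two-item illustration: it computes φ and a pairwise H (the covariance divided by its maximum attainable value given the item marginals) for dichotomous items. With a perfect Guttman pattern, H equals 1 while φ stays below 1:

```python
import numpy as np

def phi_and_h(x: np.ndarray, y: np.ndarray):
    """phi (Pearson r for 0/1 items) and pairwise H = cov / max cov given marginals.
    The max-cov formula below assumes a positive association between the items."""
    p_x, p_y = x.mean(), y.mean()
    cov = (x * y).mean() - p_x * p_y
    phi = cov / np.sqrt(p_x * (1 - p_x) * p_y * (1 - p_y))
    cov_max = min(p_x, p_y) - p_x * p_y   # largest covariance the marginals allow
    return phi, cov / cov_max

# Perfect Guttman pattern: everyone who passes the harder item passes the easier one
x = np.array([1, 1, 1, 0, 0, 0, 0, 0])   # harder item
y = np.array([1, 1, 1, 1, 1, 0, 0, 0])   # easier item
phi, h = phi_and_h(x, y)                  # phi = 0.6, h = 1.0, so H >= phi
```

Because the marginals of the two items differ, φ cannot reach 1 even for perfectly consistent response patterns, which is the attenuation the article formalizes.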

Citations: 0
Reassessing the fitting propensity of factor models.
IF 7.6 · CAS Region 1 (Psychology) · Q1 PSYCHOLOGY, MULTIDISCIPLINARY · Pub Date: 2025-02-10 · DOI: 10.1037/met0000735
Wes Bonifay, Li Cai, Carl F Falk, Kristopher J Preacher

Model complexity is a critical consideration when evaluating a statistical model. To quantify complexity, one can examine fitting propensity (FP), or the ability of the model to fit well to diverse patterns of data. The scant foundational research on FP has focused primarily on proof of concept rather than practical application. To address this oversight, the present work joins a recently published study in examining the FP of models that are commonly applied in factor analysis. We begin with a historical account of statistical model evaluation, which refutes the notion that complexity can be fully understood by counting the number of free parameters in the model. We then present three sets of analytic examples to better understand the FP of exploratory and confirmatory factor analysis models that are widely used in applied research. We characterize our findings relative to previously disseminated claims about factor model FP. Finally, we provide some recommendations for future research on FP in latent variable modeling. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
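As a rough illustration of the FP idea (not the formal measure used in the article), one can ask how well a model reproduces many *randomly generated* data patterns. The sketch below uses a crude one-factor, first-eigenpair approximation and the average off-diagonal residual as a fit proxy; all design choices here are illustrative assumptions:

```python
import numpy as np

def srmr_one_factor(R: np.ndarray) -> float:
    """Crude one-factor approximation (loadings = first eigenpair), then the
    root-mean-square residual over the off-diagonal correlations."""
    vals, vecs = np.linalg.eigh(R)
    lam = np.sqrt(max(vals[-1], 0.0)) * vecs[:, -1]
    resid = R - np.outer(lam, lam)
    mask = ~np.eye(R.shape[0], dtype=bool)
    return float(np.sqrt(np.mean(resid[mask] ** 2)))

# FP proxy: average misfit of the model across many random data patterns
rng = np.random.default_rng(42)
fits = []
for _ in range(200):
    X = rng.normal(size=(20, 4))   # small n -> diverse sample correlation patterns
    fits.append(srmr_one_factor(np.corrcoef(X, rowvar=False)))
fp_proxy = float(np.mean(fits))    # lower average misfit = higher fitting propensity
```

Comparing this average misfit across candidate models, rather than counting their free parameters, is the spirit of the FP comparisons the abstract describes.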

Citations: 0
A novel approach to estimate moderated treatment effects and moderated mediated effects with continuous moderators.
IF 7.6 · CAS Region 1 (Psychology) · Q1 PSYCHOLOGY, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · Epub Date: 2023-06-12 · DOI: 10.1037/met0000593
Matthew J Valente, Judith J M Rijnhart, Oscar Gonzalez

Moderation analysis is used to study under what conditions or for which subgroups of individuals a treatment effect is stronger or weaker. When a moderator variable is categorical, such as assigned sex, treatment effects can be estimated for each group resulting in a treatment effect for males and a treatment effect for females. If a moderator variable is a continuous variable, a strategy for investigating moderated treatment effects is to estimate conditional effects (i.e., simple slopes) via the pick-a-point approach. When conditional effects are estimated using the pick-a-point approach, the conditional effects are often given the interpretation of "the treatment effect for the subgroup of individuals…." However, the interpretation of these conditional effects as subgroup effects is potentially misleading because conditional effects are interpreted at a specific value of the moderator variable (e.g., +1 SD above the mean). We describe a simple solution that resolves this problem using a simulation-based approach. We describe how to apply this simulation-based approach to estimate subgroup effects by defining subgroups using a range of scores on the continuous moderator variable. We apply this method to three empirical examples to demonstrate how to estimate subgroup effects for moderated treatment and moderated mediated effects when the moderator variable is a continuous variable. Finally, we provide researchers with both SAS and R code to implement this method for similar situations described in this paper. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
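A minimal sketch of the pick-a-point conditional effect and a range-based subgroup effect follows. This is a simplified stand-in for the article's simulation-based approach (which additionally propagates estimation uncertainty by simulating from the coefficients' sampling distribution); all data and coefficient values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
m = rng.normal(size=n)                          # continuous moderator
t = rng.binomial(1, 0.5, size=n).astype(float)  # treatment indicator
y = 0.3 * t + 0.2 * m + 0.5 * t * m + rng.normal(size=n)

# OLS for y = b0 + b1*t + b2*m + b3*t*m
X = np.column_stack([np.ones(n), t, m, t * m])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Pick-a-point: conditional treatment effect at +1 SD of the moderator
effect_at_plus1sd = b[1] + b[3] * m.std()

# Range-based subgroup effect: average the conditional effect over everyone
# whose moderator score falls in a chosen range (here, above the mean)
subgroup = m > m.mean()
subgroup_effect = float(np.mean(b[1] + b[3] * m[subgroup]))
```

The pick-a-point value describes the effect at one exact moderator score, whereas the range-based average is interpretable as an effect for an actual subgroup of observations, which is the distinction the abstract emphasizes.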

Citations: 0
Troubleshooting Bayesian cognitive models.
IF 7.6 · CAS Region 1 (Psychology) · Q1 PSYCHOLOGY, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · Epub Date: 2023-03-27 · DOI: 10.1037/met0000554
Beth Baribault, Anne G E Collins

Using Bayesian methods to apply computational models of cognitive processes, or Bayesian cognitive modeling, is an important new trend in psychological research. The rise of Bayesian cognitive modeling has been accelerated by the introduction of software that efficiently automates the Markov chain Monte Carlo sampling used for Bayesian model fitting, including the popular Stan and PyMC packages, which automate the dynamic Hamiltonian Monte Carlo and No-U-Turn Sampler (HMC/NUTS) algorithms that we spotlight here. Unfortunately, Bayesian cognitive models can struggle to pass the growing number of diagnostic checks required of Bayesian models. If any failures are left undetected, inferences about cognition based on the model's output may be biased or incorrect. As such, Bayesian cognitive models almost always require troubleshooting before being used for inference. Here, we present a deep treatment of the diagnostic checks and procedures that are critical for effective troubleshooting, but are often left underspecified by tutorial papers. After a conceptual introduction to Bayesian cognitive modeling and HMC/NUTS sampling, we outline the diagnostic metrics, procedures, and plots necessary to detect problems in model output with an emphasis on how these requirements have recently been changed and extended. Throughout, we explain how uncovering the exact nature of the problem is often the key to identifying solutions. We also demonstrate the troubleshooting process for an example hierarchical Bayesian model of reinforcement learning, including supplementary code. With this comprehensive guide to techniques for detecting, identifying, and overcoming problems in fitting Bayesian cognitive models, psychologists across subfields can more confidently build and use Bayesian cognitive models in their research. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
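One of the standard diagnostic checks discussed in such treatments, split-R-hat, can be computed directly. This is a generic sketch of the Gelman-et-al.-style formula on simulated draws, not the authors' supplementary code:

```python
import numpy as np

def split_rhat(chains: np.ndarray) -> float:
    """Split-R-hat: halve each chain, then compare between- and within-chain
    variance. Values near 1.0 indicate convergence; larger values are suspect."""
    half = chains.shape[1] // 2
    splits = np.concatenate([chains[:, :half], chains[:, half:2 * half]], axis=0)
    n = splits.shape[1]
    w = splits.var(axis=1, ddof=1).mean()        # within-chain variance
    b = n * splits.mean(axis=1).var(ddof=1)      # between-chain variance
    var_plus = (n - 1) / n * w + b / n
    return float(np.sqrt(var_plus / w))

rng = np.random.default_rng(0)
good = rng.normal(size=(4, 1000))                      # four well-mixed chains
stuck = good + np.array([[0.0], [0.0], [0.0], [5.0]])  # one chain stuck elsewhere
```

`split_rhat(good)` lands near 1.0 while `split_rhat(stuck)` is far above it, which is why a failed R-hat check points toward a chain exploring a different region, one of the exact-nature diagnoses the abstract advocates.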

Citations: 0
Is exploratory factor analysis always to be preferred? A systematic comparison of factor analytic techniques throughout the confirmatory-exploratory continuum.
IF 7.6 · CAS Region 1 (Psychology) · Q1 PSYCHOLOGY, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · Epub Date: 2023-05-25 · DOI: 10.1037/met0000579
Pablo Nájera, Francisco J Abad, Miguel A Sorrel

The number of available factor analytic techniques has been increasing in the last decades. However, the lack of clear guidelines and exhaustive comparison studies between the techniques may prevent these valuable methodological advances from making their way into applied research. The present paper evaluates the performance of confirmatory factor analysis (CFA), CFA with sequential model modification using modification indices and the Saris procedure, exploratory factor analysis (EFA) with different rotation procedures (Geomin, target, and objectively refined target matrix), Bayesian structural equation modeling (BSEM), and a new set of procedures that, after fitting an unrestrictive model (i.e., EFA, BSEM), identify and retain only the relevant loadings to provide a parsimonious CFA solution (ECFA, BCFA). By means of an exhaustive Monte Carlo simulation study and a real data illustration, it is shown that CFA and BSEM are overly stiff and, consequently, do not appropriately recover the structure of slightly misspecified models. EFA usually provides the most accurate parameter estimates, although the rotation procedure choice is of major importance, especially depending on whether the latent factors are correlated or not. Finally, ECFA might be a sound option whenever an a priori structure cannot be hypothesized and the latent factors are correlated. Moreover, it is shown that the pattern of the results of a factor analytic technique can be somehow predicted based on its positioning in the confirmatory-exploratory continuum. Applied recommendations are given for the selection of the most appropriate technique under different representative scenarios by means of a detailed flowchart. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
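Target rotation, one of the procedures compared, can be sketched as an orthogonal Procrustes alignment of unrotated loadings toward a hypothesized pattern. The data, population loadings, and the extraction step (a simple eigenpair approximation rather than a full EFA) are all illustrative assumptions:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(1)
# Hypothetical population loadings: 6 items, 2 orthogonal factors, simple structure
L = np.array([[0.7, 0.0], [0.6, 0.0], [0.8, 0.0],
              [0.0, 0.7], [0.0, 0.6], [0.0, 0.8]])
Sigma = L @ L.T + np.diag(1.0 - (L ** 2).sum(axis=1))
X = rng.multivariate_normal(np.zeros(6), Sigma, size=2000)
R = np.corrcoef(X, rowvar=False)

# Unrotated loadings from the top-2 eigenpairs (a stand-in for a full EFA extraction)
vals, vecs = np.linalg.eigh(R)
A = vecs[:, -2:] * np.sqrt(vals[-2:])

# Target rotation: orthogonal Procrustes alignment toward the hypothesized pattern
T, _ = orthogonal_procrustes(A, L)
rotated = A @ T
```

Unlike a CFA that forces the cross-loadings to exactly zero, the rotated EFA solution leaves them free, which is why EFA can absorb slight misspecifications that make strict CFA solutions degrade.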

Citations: 0
Inference with cross-lagged effects-Problems in time.
IF 7.6 · CAS Region 1 (Psychology) · Q1 PSYCHOLOGY, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · Epub Date: 2024-07-18 · DOI: 10.1037/met0000665
Charles C Driver

The interpretation of cross-effects from vector autoregressive models to infer structure and causality among constructs is widespread and sometimes problematic. I describe problems in the interpretation of cross-effects when processes that are thought to fluctuate continuously in time are, as is typically done, modeled as changing only in discrete steps (as in, e.g., structural equation modeling): zeroes in a discrete-time temporal matrix do not necessarily correspond to zero effects in the underlying continuous processes, and vice versa. This has implications for the common case when the presence or absence of cross-effects is used for inference about underlying causal processes. I demonstrate these problems via simulation, and also show that when an underlying set of processes are continuous in time, even relatively few direct causal links can result in much denser temporal effect matrices in discrete-time. I demonstrate one solution to these issues, namely parameterizing the system as a stochastic differential equation and focusing inference on the continuous-time temporal effects. I follow this with some discussion of issues regarding the switch to continuous-time, specifically regularization, appropriate measurement time lag, and model order. An empirical example using intensive longitudinal data highlights some of the complexities of applying such approaches to real data, particularly with respect to model specification, examining misspecification, and parameter interpretation. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
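The central point, that a zero in the continuous-time drift matrix need not produce a zero in the implied discrete-time autoregressive matrix A(Δt) = exp(drift · Δt), is easy to verify numerically; the drift values below are hypothetical:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical continuous-time drift matrix for a chain x1 -> x2 -> x3:
# there is no direct x1 -> x3 effect (entry [2, 0] is exactly zero)
drift = np.array([[-0.5, 0.0, 0.0],
                  [0.4, -0.5, 0.0],
                  [0.0, 0.4, -0.5]])

# Discrete-time autoregressive matrix implied for measurement lag dt: expm(drift * dt)
A1 = expm(drift * 1.0)
A2 = expm(drift * 2.0)
# A1[2, 0] and A2[2, 0] are nonzero: the discrete-time matrix is denser than the
# continuous-time one, and the apparent x1 -> x3 cross-effect grows with the lag
```

A researcher inspecting only the lag-1 matrix would infer a direct x1 → x3 effect that does not exist in the underlying continuous process, which is exactly the inferential hazard the abstract describes.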

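The abstract's central claim, that even a few direct causal links in continuous time can produce a much denser temporal effect matrix in discrete time, can be checked numerically. A minimal sketch (the three-process chain and drift values are assumed for illustration, not taken from the article's simulations): the discrete-time autoregressive matrix over measurement lag dt is the matrix exponential of the drift matrix scaled by dt; the truncated Taylor series below stands in for a library routine such as `scipy.linalg.expm`.

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via a truncated Taylor series (adequate for small, well-scaled M)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Continuous-time drift matrix for three processes forming a causal chain
# X1 -> X2 -> X3, with NO direct X1 -> X3 link (A[2, 0] == 0).
A = np.array([[-1.0, 0.0, 0.0],
              [0.5, -1.0, 0.0],
              [0.0, 0.5, -1.0]])

# Discrete-time temporal (cross-lagged) effect matrix for measurement lag dt.
dt = 1.0
A_dt = expm_taylor(A * dt)

# The direct continuous-time effect X1 -> X3 is exactly zero ...
print(A[2, 0])               # 0.0
# ... yet the discrete-time cross-lagged effect X1 -> X3 is not.
print(round(A_dt[2, 0], 3))  # 0.046
```

The nonzero entry arises because over a finite interval the X1 effect propagates through X2, so a zero in the discrete-time matrix cannot be read as a zero in the underlying continuous process, nor vice versa.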
Everything has its price: Foundations of cost-sensitive machine learning and its application in psychology.
IF 7.6 CAS Tier 1 (Psychology) Q1 PSYCHOLOGY, MULTIDISCIPLINARY Pub Date: 2025-02-01 Epub Date: 2023-08-10 DOI: 10.1037/met0000586
Philipp Sterner, David Goretzko, Florian Pargent

Psychology has seen an increase in the use of machine learning (ML) methods. In many applications, observations are classified into one of two groups (binary classification). Off-the-shelf classification algorithms assume that the costs of a misclassification (false positive or false negative) are equal. Because this is often not reasonable (e.g., in clinical psychology), cost-sensitive machine learning (CSL) methods can take different cost ratios into account. We present the mathematical foundations and introduce a taxonomy of the most commonly used CSL methods, before demonstrating their application and usefulness on psychological data, that is, the drug consumption data set (N = 1,885) from the University of California Irvine ML Repository. In our example, all demonstrated CSL methods noticeably reduced mean misclassification costs compared to regular ML algorithms. We discuss the necessity for researchers to perform small benchmarks of CSL methods for their own practical application. Thus, our open materials provide R code, demonstrating how CSL methods can be applied within the mlr3 framework (https://osf.io/cvks7/). (PsycInfo Database Record (c) 2025 APA, all rights reserved).

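The core idea behind cost-sensitive classification can be sketched independently of the article's mlr3 materials (their open code is in R; the Python below is a hypothetical analogue with illustrative costs, not the authors' implementation). With misclassification costs c_FP and c_FN, a probabilistic classifier minimizes expected cost by predicting "positive" whenever p > c_FP / (c_FP + c_FN), rather than at the default 0.5 threshold.

```python
# Illustrative, asymmetric misclassification costs (not from the article).
C_FP = 1.0   # cost of a false positive
C_FN = 10.0  # cost of a false negative (e.g., missing a clinical case)

def cost_optimal_threshold(c_fp, c_fn):
    # Predict positive when p * c_fn > (1 - p) * c_fp, i.e. p > c_fp / (c_fp + c_fn).
    return c_fp / (c_fp + c_fn)

def mean_cost(probs, labels, threshold, c_fp, c_fn):
    """Mean misclassification cost of thresholded predictions."""
    total = 0.0
    for p, y in zip(probs, labels):
        pred = 1 if p > threshold else 0
        if pred == 1 and y == 0:
            total += c_fp
        elif pred == 0 and y == 1:
            total += c_fn
    return total / len(labels)

# Toy predicted probabilities and true labels.
probs = [0.05, 0.2, 0.4, 0.6, 0.8, 0.3, 0.15, 0.7]
labels = [0, 0, 1, 1, 1, 0, 1, 1]

t_star = cost_optimal_threshold(C_FP, C_FN)          # ~0.091
print(mean_cost(probs, labels, 0.5, C_FP, C_FN))     # 2.5
print(mean_cost(probs, labels, t_star, C_FP, C_FN))  # 0.25
```

Lowering the threshold trades a few cheap false positives for avoiding expensive false negatives, which is exactly the mechanism by which the CSL methods in the article reduce mean misclassification cost.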
A primer on synthesizing individual participant data obtained from complex sampling surveys: A two-stage IPD meta-analysis approach.
IF 7.6 CAS Tier 1 (Psychology) Q1 PSYCHOLOGY, MULTIDISCIPLINARY Pub Date: 2025-02-01 Epub Date: 2023-01-09 DOI: 10.1037/met0000539
Diego G Campos, Mike W-L Cheung, Ronny Scherer

The increasing availability of individual participant data (IPD) in the social sciences offers new possibilities to synthesize research evidence across primary studies. Two-stage IPD meta-analysis represents a framework that can utilize these possibilities. While most of the methodological research on two-stage IPD meta-analysis focused on its performance compared with other approaches, dealing with the complexities of the primary and meta-analytic data has received little attention, particularly when IPD are drawn from complex sampling surveys. Complex sampling surveys often feature clustering, stratification, and multistage sampling to obtain nationally or internationally representative data from a target population. Furthermore, IPD from these studies is likely to provide more than one effect size. To address these complexities, we propose a two-stage meta-analytic approach that generates model-based effect sizes in Stage 1 and synthesizes them in Stage 2. We present a sequence of steps, illustrate their implementation, and discuss the methodological decisions and options within. Given its flexibility to deal with the complex nature of the primary and meta-analytic data and its ability to combine multiple IPD sets or IPD with aggregated data, the proposed two-stage approach opens up new analytic possibilities for synthesizing knowledge from complex sampling surveys. (PsycInfo Database Record (c) 2025 APA, all rights reserved).

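The two-stage logic can be sketched in miniature (a hypothetical illustration, not the authors' procedure): Stage 1 reduces each study's individual participant data to an effect size and standard error, and Stage 2 pools these with fixed-effect inverse-variance weighting. The article additionally handles complex sampling features such as weights, clustering, and multiple effect sizes per study, all of which this sketch omits.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def stage1_effect(treat, ctrl):
    """Stage 1: reduce one study's IPD to an effect size (raw mean difference) and its SE."""
    d = mean(treat) - mean(ctrl)
    se = math.sqrt(var(treat) / len(treat) + var(ctrl) / len(ctrl))
    return d, se

def stage2_pool(effects):
    """Stage 2: fixed-effect inverse-variance pooling of (effect, se) pairs."""
    weights = [1 / se ** 2 for _, se in effects]
    pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Toy IPD from three hypothetical studies (treatment group, control group).
studies = [
    ([5.1, 6.0, 5.5, 6.2], [4.0, 4.4, 4.1, 4.8]),
    ([5.8, 6.1, 5.2, 6.5, 5.9], [4.9, 4.2, 4.6, 5.0, 4.4]),
    ([5.0, 5.6, 6.3], [4.3, 4.7, 4.0]),
]
effects = [stage1_effect(t, c) for t, c in studies]
pooled, pooled_se = stage2_pool(effects)
print(round(pooled, 3), round(pooled_se, 3))
```

The pooled estimate lands between the individual study effects, with a smaller standard error than any single study, which is the basic payoff of meta-analytic synthesis.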
Linear mixed models and latent growth curve models for group comparison studies contaminated by outliers.
IF 7.6 CAS Tier 1 (Psychology) Q1 PSYCHOLOGY, MULTIDISCIPLINARY Pub Date: 2025-02-01 Epub Date: 2024-02-15 DOI: 10.1037/met0000643
Fabio Mason, Eva Cantoni, Paolo Ghisletta

The linear mixed model (LMM) and latent growth model (LGM) are frequently applied to within-subject two-group comparison studies to investigate group differences in the time effect, supposedly due to differential group treatments. Yet, research about LMM and LGM in the presence of outliers (defined as observations with a very low probability of occurrence if assumed from a given distribution) is scarce. Moreover, when such research exists, it focuses on estimation properties (bias and efficiency), neglecting inferential characteristics (e.g., power and type-I error). We study power and type-I error rates of Wald-type and bootstrap confidence intervals (CIs), as well as coverage and length of CIs and mean absolute error (MAE) of estimates, associated with classical and robust estimations of LMM and LGM, applied to a within-subject two-group comparison design. We conduct a Monte Carlo simulation experiment to compare CIs and MAEs under different conditions: data (a) without contamination, (b) contaminated by within-subject outliers, (c) contaminated by between-subject outliers, and (d) contaminated by both within- and between-subject outliers. Results show that without contamination, methods perform similarly, except CIs based on S, a robust LMM estimator, which are slightly less close to nominal values in their coverage. However, in the presence of both within- and between-subject outliers, CIs based on robust estimators, especially S, performed better than those of classical methods. In particular, the percentile CI with the wild bootstrap applied to the robust LMM estimators outperformed all other methods, especially with between-subject outliers, where we found the classical Wald-type CI based on the t statistic with Satterthwaite approximation for LMM to be highly misleading. We provide R code to compute all methods presented here. (PsycInfo Database Record (c) 2025 APA, all rights reserved).

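One ingredient of the comparison, the percentile bootstrap CI, is easy to state in isolation. The sketch below is a generic illustration of the percentile principle for a two-group mean difference; the article's wild bootstrap applied to robust LMM/LGM estimators is considerably more involved, and the data here are invented.

```python
import random
import statistics

def percentile_bootstrap_ci(xs, ys, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the difference in means between two groups."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        # Resample each group with replacement and record the mean difference.
        bx = [rng.choice(xs) for _ in xs]
        by = [rng.choice(ys) for _ in ys]
        diffs.append(statistics.mean(bx) - statistics.mean(by))
    diffs.sort()
    # The CI endpoints are the empirical alpha/2 and 1 - alpha/2 quantiles.
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

group1 = [5.1, 6.0, 5.5, 6.2, 5.8, 6.1, 5.2]
group2 = [4.0, 4.4, 4.1, 4.8, 4.9, 4.2, 4.6]
lo, hi = percentile_bootstrap_ci(group1, group2)
print(lo, hi)  # an interval around the observed difference of about 1.27
```

Because the interval is read directly off the resampling distribution, it needs no normality assumption, which is one reason bootstrap CIs can remain informative where Wald-type CIs mislead.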