“Transforming” Personality Scale Development: Illustrating the Potential of State-of-the-Art Natural Language Processing
Shea Fyffe, Philseok Lee, Seth A. Kaplan
Pub Date: 2023-03-06, DOI: 10.1177/10944281231155771
Natural language processing (NLP) techniques are becoming increasingly popular in industrial and organizational psychology. One promising area for NLP-based applications is scale development; yet, while many possibilities exist, these applications have so far been restricted mainly to automated item generation. The current research expands this potential by illustrating an NLP-based approach to content analysis, the task of categorizing scale items by the constructs they measure, which has traditionally been performed manually. In NLP, content analysis is framed as a text classification task whereby a model is trained to automatically assign scale items to the construct that they measure. Here, we present an approach to text classification, using state-of-the-art transformer models, that builds upon past approaches. We begin by introducing transformer models and their advantages over alternative methods. Next, we illustrate how to train a transformer to content analyze Big Five personality items. Then, we compare the trained models to human raters, finding that transformer models outperform human raters and several alternative models. Finally, we present practical considerations, limitations, and future research directions.

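To make the training step concrete, below is a minimal Python sketch (using the Hugging Face transformers library) of fine-tuning a transformer to classify items into Big Five constructs. The base model, the toy items, and the training settings are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: fine-tune a transformer to assign personality items to
# Big Five constructs. Model name, items, and hyperparameters are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]

# Toy training items (text, construct index); a real study would use many more
items = [
    ("I have a vivid imagination.", 0),
    ("I pay attention to details.", 1),
    ("I am the life of the party.", 2),
    ("I sympathize with others' feelings.", 3),
    ("I get stressed out easily.", 4),
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for text, label in items:
        batch = tokenizer(text, return_tensors="pt", truncation=True)
        out = model(**batch, labels=torch.tensor([label]))
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Classify a held-out item by its predicted construct
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("I love to try new things.", return_tensors="pt")).logits
print(LABELS[logits.argmax(-1).item()])
```
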
{"title":"“Transforming” Personality Scale Development: Illustrating the Potential of State-of-the-Art Natural Language Processing","authors":"Shea Fyffe, Philseok Lee, Seth A. Kaplan","doi":"10.1177/10944281231155771","DOIUrl":"https://doi.org/10.1177/10944281231155771","url":null,"abstract":"Natural language processing (NLP) techniques are becoming increasingly popular in industrial and organizational psychology. One promising area for NLP-based applications is scale development; yet, while many possibilities exist, so far these applications have been restricted—mainly focusing on automated item generation. The current research expands this potential by illustrating an NLP-based approach to content analysis, which manually categorizes scale items by their measured constructs. In NLP, content analysis is performed as a text classification task whereby a model is trained to automatically assign scale items to the construct that they measure. Here, we present an approach to text classification—using state-of-the-art transformer models—that builds upon past approaches. We begin by introducing transformer models and their advantages over alternative methods. Next, we illustrate how to train a transformer to content analyze Big Five personality items. Then, we compare the models trained to human raters, finding that transformer models outperform human raters and several alternative models. Finally, we present practical considerations, limitations, and future research directions.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47097386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supervised Construct Scoring to Reduce Personality Assessment Length: A Field Study and Introduction to the Short 10
Andrew B. Speer, James Perrotta, R. Jacobs
Pub Date: 2023-01-03, DOI: 10.1177/10944281221145694
Personality assessments help identify qualified job applicants in hiring decisions and are used broadly in the organizational sciences. However, many existing personality measures are quite lengthy, and companies and researchers frequently seek ways to shorten personality scales. The current research investigated the effectiveness of a new scale-shortening method called supervised construct scoring (SCS), testing the efficacy of this method across two applied samples. Combining machine learning with content validity considerations, we show that multidimensional personality scales can be substantially shortened while maintaining reliability and validity, especially when compared to traditional shortening methods. In Study 1, we shortened a 100-item personality assessment measuring DeYoung et al.'s 10 facets, producing a scale 26% of the original length. SCS scores exhibited strong evidence of reliability, convergence with full-scale scores, and criterion-related validity. This measure, labeled the Short 10, is made freely available. In Study 2, we applied SCS to shorten an operational police personality assessment. Using SCS, we reduced test length to 25% of the original while maintaining similar levels of reliability and criterion-related validity when predicting job performance ratings.

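The sketch below illustrates the general logic of scoring-based shortening under stated assumptions: simulated Likert items, an L1-penalized regression standing in for the authors' SCS procedure, and an arbitrary penalty strength. Items with zero weights can be dropped while the weighted short form still tracks the full-scale score.

```python
# Hedged sketch of scoring-based scale shortening (not the authors' SCS code):
# fit a sparse model that reproduces the full-scale score from a subset of items.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 500
theta = rng.normal(size=n)                                 # latent trait
X = theta[:, None] + rng.normal(scale=0.5, size=(n, 10))   # 10 correlated items
full_scale = X.mean(axis=1)                                # full-scale score to reproduce

# The L1 penalty drives redundant item weights to zero; alpha is an assumption
model = Lasso(alpha=0.05).fit(X, full_scale)
kept = np.flatnonzero(model.coef_)
print(f"short form keeps {kept.size}/10 items:", kept.tolist())
print("r(short, full) =", np.corrcoef(model.predict(X), full_scale)[0, 1].round(3))
```
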
{"title":"Supervised Construct Scoring to Reduce Personality Assessment Length: A Field Study and Introduction to the Short 10","authors":"Andrew B. Speer, James Perrotta, R. Jacobs","doi":"10.1177/10944281221145694","DOIUrl":"https://doi.org/10.1177/10944281221145694","url":null,"abstract":"Personality assessments help identify qualified job applicants when making hiring decisions and are used broadly in the organizational sciences. However, many existing personality measures are quite lengthy, and companies and researchers frequently seek ways to shorten personality scales. The current research investigated the effectiveness of a new scale-shortening method called supervised construct scoring (SCS), testing the efficacy of this method across two applied samples. Using a combination of machine learning with content validity considerations, we show that multidimensional personality scales can be significantly shortened while maintaining reliability and validity, and especially when compared to traditional shortening methods. In Study 1, we shortened a 100-item personality assessment of DeYoung et al.'s 10 facets, producing a scale 26% the original length. SCS scores exhibited strong evidence of reliability, convergence with full scale scores, and criterion-related validity. This measure, labeled the Short 10, is made freely available. In Study 2, we applied SCS to shorten an operational police personality assessment. By using SCS, we reduced test length to 25% of the original length while maintaining similar levels of reliability and criterion-related validity when predicting job performance ratings.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48250283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Review Research as Scientific Inquiry
Sven Kunisch, D. Denyer, J. Bartunek, Markus Menz, Laura B. Cardinal
Pub Date: 2022-12-26, DOI: 10.1177/10944281221127292, Organizational Research Methods, 26(1), 3-45
This article and the related upcoming Feature Topic at Organizational Research Methods were motivated by the concern that, despite the burgeoning number and diversity of review articles, there was a lack of guidance on how to produce rigorous and impactful literature reviews. In this article, we introduce review research as a class of research inquiries that uses prior research as data sources to develop knowledge contributions for academia, practice, and policy. We first trace the evolution of review research both outside of and within management, including the articles published in this Feature Topic, and provide a holistic definition of review research. Then, we argue that, given the plurality of forms of review research, the alignment of purpose and methods is crucial for high-quality review research. To that end, we discuss several review purposes, present criteria for assessing the rigor and impact of review research, and explain how these criteria and the review methods need to be aligned with the review's purpose. Our article provides guidance for conducting or evaluating review research and helps establish review research as a credible and legitimate scientific endeavor.

{"title":"Review Research as Scientific Inquiry","authors":"Sven Kunisch, D. Denyer, J. Bartunek, Markus Menz, Laura B. Cardinal","doi":"10.1177/10944281221127292","DOIUrl":"https://doi.org/10.1177/10944281221127292","url":null,"abstract":"This article and the related Feature Topic at Organizational Research Methods upcoming were motivated by the concern that despite the bourgeoning number and diversity of review articles, there was a lack of guidance on how to produce rigorous and impactful literature reviews. In this article, we introduce review research as a class of research inquiries that uses prior research as data sources to develop knowledge contributions for academia, practice and policy. We first trace the evolution of review research both outside of and within management including the articles published in this Feature Topic, and provide a holistic definition of review research. Then, we argue that in the plurality of forms of review research, the alignment of purpose and methods is crucial for high-quality review research. To accomplish this, we discuss several review purposes and criteria for assessing review research's rigor and impact, and discuss how these and the review methods need to be aligned with its purpose. Our paper provides guidance for conducting or evaluating review research and helps establish review research as a credible and legitimate scientific endeavor.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"26 1","pages":"3 - 45"},"PeriodicalIF":9.5,"publicationDate":"2022-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43364600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SRM_R: A Web-Based Shiny App for Social Relations Analyses
Man-Nok Wong, D. Kenny, A. Knight
Pub Date: 2022-11-20, DOI: 10.1177/10944281221134104
Many topics in organizational research involve examining the interpersonal perceptions and behaviors of group members. The resulting data can be analyzed using the social relations model (SRM). This model enables researchers to address several important questions regarding relational phenomena. In the model, variance can be partitioned into group, actor, partner, and relationship components; reciprocity can be assessed in terms of individuals and dyads; and predictors at each of these levels can be analyzed. However, analyzing data using the currently available SRM software can be challenging and can deter organizational researchers from using the model. In this article, we provide a “go-to” introduction to SRM analyses and present SRM_R (https://davidakenny.shinyapps.io/SRM_R/), an accessible, user-friendly, web-based application for SRM analyses. The basic steps of conducting SRM analyses in the app are illustrated with a sample dataset of 47 teams, 228 members, and 884 dyadic observations, using the participants’ ratings of the advice-seeking behavior of their fellow employees.

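A rough sketch of the decomposition the app automates is given below. It simulates one round-robin group and recovers actor and partner effects from row and column means; the actual SRM estimators (and the SRM_R app) use more refined formulas, so treat this purely as intuition under invented parameters.

```python
# Hedged sketch of the SRM idea: ratings = group mean + actor + partner +
# relationship. Row/column means roughly recover actor/partner effects.
import numpy as np

rng = np.random.default_rng(1)
n = 8                               # group members in a round robin
actor = rng.normal(0, 1.0, n)       # rater tendency (how one sees others)
partner = rng.normal(0, 0.8, n)     # target tendency (how one is seen)
rel = rng.normal(0, 0.5, (n, n))    # dyad-specific effects
ratings = 5 + actor[:, None] + partner[None, :] + rel
np.fill_diagonal(ratings, np.nan)   # no self-ratings

row_mean = np.nanmean(ratings, axis=1)  # approximates grand mean + actor effect
col_mean = np.nanmean(ratings, axis=0)  # approximates grand mean + partner effect
print("actor effect recovery r:  ", np.corrcoef(actor, row_mean)[0, 1].round(2))
print("partner effect recovery r:", np.corrcoef(partner, col_mean)[0, 1].round(2))
```
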
{"title":"SRM_R: A Web-Based Shiny App for Social Relations Analyses","authors":"Man-Nok Wong, D. Kenny, A. Knight","doi":"10.1177/10944281221134104","DOIUrl":"https://doi.org/10.1177/10944281221134104","url":null,"abstract":"Many topics in organizational research involve examining the interpersonal perceptions and behaviors of group members. The resulting data can be analyzed using the social relations model (SRM). This model enables researchers to address several important questions regarding relational phenomena. In the model, variance can be partitioned into group, actor, partner, and relationship; reciprocity can be assessed in terms of individuals and dyads; and predictors at each of these levels can be analyzed. However, analyzing data using the currently available SRM software can be challenging and can deter organizational researchers from using the model. In this article, we provide a “go-to” introduction to SRM analyses and propose SRM_R ( https://davidakenny.shinyapps.io/SRM_R/ ), an accessible and user-friendly, web-based application for SRM analyses. The basic steps of conducting SRM analyses in the app are illustrated with a sample dataset of 47 teams, 228 members, and 884 dyadic observations, using the participants’ ratings of the advice-seeking behavior of their fellow employees.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2022-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48039279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensitizing Social Interaction with a Mode-Enhanced Transcribing Process
Qian Li
Pub Date: 2022-10-31, DOI: 10.1177/10944281221134096
Qualitative researchers often work with texts transcribed from social interactions such as interviews, meetings, and presentations. However, how we make sense of such data to generate promising cues for further analysis is rarely discussed. This article proposes mode-enhanced transcription as a tool for sensitizing social interaction data: a process in which researchers attune their attention to the dynamic interplay of verbal and nonverbal features, expressions, and acts when transcribing and proofreading professional transcripts. Two scenarios for using mode-enhanced transcription are introduced: sensitizing previously collected data and engaging with modes purposefully. Their implications for research focus, data collection, and data analysis are discussed based on a demonstration of the process with a previously collected dataset and an illustrative review of published articles that display mode-enhanced excerpts. The article outlines the benefits and further considerations of using mode-enhanced transcription as a sensitizing tool.

{"title":"Sensitizing Social Interaction with a Mode-Enhanced Transcribing Process","authors":"Qian Li","doi":"10.1177/10944281221134096","DOIUrl":"https://doi.org/10.1177/10944281221134096","url":null,"abstract":"Qualitative researchers often work with texts transcribed from social interactions such as interviews, meetings, and presentations. However, how we make sense of such data to generate promising cues for further analysis is rarely discussed. This article proposes mode-enhanced transcription as a tool for sensitizing social interaction data, defined as a process in which researchers attune their attention to the dynamic interplay of verbal and nonverbal features, expressions, and acts when transcribing and proofreading professional transcripts. Two scenarios for using mode-enhanced transcription are introduced: sensitizing previously collected data and engaging with modes purposefully. Their implications for research focus, data collection, and data analysis are discussed based on a demonstration of the process with a previously collected dataset and an illustrative review of published articles that display mode-enhanced excerpts. The article outlines the benefits and further considerations of using mode-enhanced transcription as a sensitizing tool.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2022-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45729472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessment of Path Model Fit: Evidence of Effectiveness and Recommendations for use of the RMSEA-P
L. J. Williams, Aaron R. Williams, Ernest H. O’Boyle
Pub Date: 2022-10-17, DOI: 10.1177/10944281221124946
We review the development of path model fit measures for latent variable models and highlight how they differ from global fit measures. Next, we consider findings from two published simulation articles that reach different conclusions about the effectiveness of one path model fit measure, the RMSEA-P. We then report the results of a new simulation study aimed at resolving the questions of whether and how the RMSEA-P should be used by organizational researchers. These results show that the RMSEA-P and its confidence interval are very effective with multiple-indicator models at identifying misspecifications across large and small sample sizes and are effective at identifying true models at moderate to large sample sizes. We conclude with recommendations for how the RMSEA-P can be incorporated, along with other information, into model evaluation.

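For readers unfamiliar with the statistic, RMSEA-type indices follow the generic formula RMSEA = sqrt(max(χ² − df, 0) / (df(N − 1))), which the RMSEA-P applies to the chi-square and degrees of freedom of the path (structural) portion of the model. A hedged illustration with invented numbers:

```python
# Generic RMSEA point estimate; the chi-square/df values below are made up
# for illustration, not results from the article.
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical structural-portion fit statistics
chi2_path, df_path, n = 18.4, 5, 300
print(f"RMSEA-P = {rmsea(chi2_path, df_path, n):.3f}")
```
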
{"title":"Assessment of Path Model Fit: Evidence of Effectiveness and Recommendations for use of the RMSEA-P","authors":"L. J. Williams, Aaron R. Williams, Ernest H. O’Boyle","doi":"10.1177/10944281221124946","DOIUrl":"https://doi.org/10.1177/10944281221124946","url":null,"abstract":"We review the development of path model fit measures for latent variable models and highlight how they are different from global fit measures. Next, we consider findings from two published simulation articles that reach different conclusions about the effectiveness of one path model fit measure (RMSEA-P). We then report the results of a new simulation study aimed at resolving the questions of whether and how the RMSEA-P should be used by organizational researchers. These results show that the RMSEA-P and its confidence interval is very effective with multiple indicator models at identifying misspecifications across large and small sample sizes and is effective at identifying true models at moderate to large sample sizes. We conclude with recommendations for how the RMSEA-P can be incorporated along with other information into model evaluation.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2022-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44950558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Should Moderated Regressions Include or Exclude Quadratic Terms? Present Both! Then Apply Our Linear Algebraic Analysis to Identify the Preferable Specification
A. Kalnins
Pub Date: 2022-10-11, DOI: 10.1177/10944281221124945
Organizational research increasingly tests moderated relationships using multiple regression with interaction terms. Most research does so with little concern for curvilinear relationships. But methodologists have established that omitting quadratic terms of correlated primary variables may create false interaction positives (Type I errors). If dependent variables are generated by the canonical process in which fully specified regressions satisfy the Gauss-Markov assumptions, including quadratics solves the problem. But our empirical analysis of published organizational research suggests that dependent variables are often generated by processes where, even with quadratics included, regression analyses remain Gauss-Markov noncompliant. In such cases, our linear algebraic analysis demonstrates that including quadratics, even those motivated by compelling theory, may exacerbate rather than mitigate the incidence of false interaction positives. The interaction coefficient may substantially change its magnitude and even flip sign once quadratics are included, and not necessarily for the better. We encourage researchers to present two full sets of results when testing moderating hypotheses: one with, and one without, quadratic terms. Researchers should then answer five questions developed here to determine the preferable set of results.

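A quick simulation makes the omitted-quadratic problem tangible. In the sketch below (an invented data-generating process, not the article's analysis), the true model is curvilinear with no interaction; omitting the quadratics produces a spurious interaction estimate, which shrinks once the quadratic terms are added.

```python
# Hedged sketch of the article's recommendation: report the moderated
# regression both without and with quadratic terms and compare.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)     # correlated predictors
y = x1 + x2 + 0.5 * x1**2 + rng.normal(size=n)    # curvilinear, NO true interaction

X_no_quad = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
X_quad = sm.add_constant(np.column_stack([x1, x2, x1 * x2, x1**2, x2**2]))

b_no_quad = sm.OLS(y, X_no_quad).fit().params[3]  # interaction coefficient
b_quad = sm.OLS(y, X_quad).fit().params[3]
print(f"interaction estimate without quadratics: {b_no_quad:.3f}  (false positive risk)")
print(f"interaction estimate with quadratics:    {b_quad:.3f}")
```
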
{"title":"Should Moderated Regressions Include or Exclude Quadratic Terms? Present Both! Then Apply Our Linear Algebraic Analysis to Identify the Preferable Specification","authors":"A. Kalnins","doi":"10.1177/10944281221124945","DOIUrl":"https://doi.org/10.1177/10944281221124945","url":null,"abstract":"Organizational research increasingly tests moderated relationships using multiple regression with interaction terms. Most research does so with little concern regarding curvilinear relationships. But methodologists have established that omitting quadratic terms of correlated primary variables may create false interaction positives (type 1 errors). If dependent variables are generated by the canonical process where fully specified regressions satisfy the Gauss-Markov assumptions, including quadratics solves the problem. But our empirical analysis of published organizational research suggests that dependent variables are often generated by processes where, even with quadratics included, regression analyses will remain Gauss-Markov non-compliant. In such cases, our linear algebraic analysis demonstrates that including quadratics—even those motivated by compelling theory—may exacerbate rather than mitigate the incidence of false interaction positives. The interaction coefficient may substantially change its magnitude and even flip sign once quadratics are included, and not necessarily for the better. We encourage researchers to present two full sets of results when testing moderating hypotheses—one with, and one without, quadratic terms. Researchers should then answer five questions developed here in order to determine the preferable set of results.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2022-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45785943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring What Matters: Assessing how Executives Reference Firm Performance in Corporate Filings
S. Certo, Chunhu Jeon, Kristen A. Raney, Wookyung Lee
Pub Date: 2022-10-11, DOI: 10.1177/10944281221125160
We know very little about the performance measures executives use to make decisions. To fill this void, we investigate the performance variables that executives reference in corporate filings with the SEC. Our analyses suggest that executives refer to monetary variables (i.e., revenue, profit, and cash flow) in over 98% of filings. In contrast, executives refer to unitless performance measures scaled by size (i.e., return on assets and return on equity), which are favored by organizational scholars, in less than 15% of filings. We find that this preference for unscaled measures holds across levels of market capitalization and actual firm performance. In other words, even observations with the highest levels of ROA and ROE are more likely to include monetary measures than ratios. In fact, we find that almost every observation that references ratios also includes monetary measures of firm performance. Stated differently, our findings suggest executives use ratios in addition to, not instead of, monetary measures. We discuss research opportunities for scholars to further align with the practitioner perspective and to revisit conceptualizations of firm performance.

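At its simplest, such content coding of filings could be approximated with dictionary matching, as in the hedged sketch below; the keyword lists are illustrative assumptions, not the authors' coding scheme.

```python
# Toy sketch: flag whether filing text references monetary vs. scaled
# performance measures. Term lists are placeholders for illustration.
import re

MONETARY = ["revenue", "profit", "cash flow"]
RATIOS = ["return on assets", "return on equity", "roa", "roe"]

def references(text: str, terms: list[str]) -> bool:
    """True if any term appears as a whole word/phrase, case-insensitive."""
    return any(re.search(rf"\b{re.escape(t)}\b", text, re.IGNORECASE) for t in terms)

filing = "Net revenue grew 12% and return on equity improved to 15%."
print("monetary measure referenced:", references(filing, MONETARY))
print("ratio measure referenced:   ", references(filing, RATIOS))
```
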
{"title":"Measuring What Matters: Assessing how Executives Reference Firm Performance in Corporate Filings","authors":"S. Certo, Chunhu Jeon, Kristen A. Raney, Wookyung Lee","doi":"10.1177/10944281221125160","DOIUrl":"https://doi.org/10.1177/10944281221125160","url":null,"abstract":"We know very little about the performance measures executives use to make decisions. To fill this void, we investigate the performance variables that executives reference in corporate filings with the SEC. Our analyses suggest that executives refer to monetary variables (i.e., revenue, profit, and cash flow) in over 98% of filings. In contrast, executives refer to the unitless performance measures scaled by size (i.e., return on assets, return on equity), which are favored by organizational scholars, in less than 15% of filings. We find that this preference for unscaled measures remains across market capitalization and actual firm performance. In other words, even observations with the highest levels of ROA and ROE are more likely to include monetary measures as opposed to ratios. In fact, we find that almost every observation that references ratios also includes monetary measures of firm performance. Stated differently, our findings suggest executives use ratios in addition to—and not instead of—monetary measures. We discuss research opportunities for scholars to further align with the practitioner perspective and to revisit conceptualizations of firm performance.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2022-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45607947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advancing Reproducibility and Accountability of Unsupervised Machine Learning in Text Mining: Importance of Transparency in Reporting Preprocessing and Algorithm Selection
L. Valtonen, S. Mäkinen, J. Kirjavainen
Pub Date: 2022-09-21, DOI: 10.1177/10944281221124947
Machine learning (ML) enables the analysis of large datasets for pattern discovery. ML methods and the standards for their use have recently attracted increasing attention in organizational research; recent accounts have raised awareness of the importance of transparent ML reporting practices, especially considering the influence of preprocessing and algorithm choice on analytical results. However, efforts made thus far to advance the quality of ML research have failed to consider the special methodological requirements of unsupervised machine learning (UML) separately from those of the more common supervised machine learning (SML). We confronted these issues by studying a common organizational research dataset of unstructured text. We discovered interpretability and representativeness trade-offs between combinations of preprocessing and UML algorithm choices, trade-offs that jeopardize research reproducibility, accountability, and transparency. We highlight the need for contextual justifications to address such issues and offer principles for assessing the contextual suitability of UML choices in research settings.

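The kind of sensitivity documented here can be demonstrated in a few lines: holding the corpus and clustering algorithm fixed while toggling a single preprocessing option (here, stop-word removal) changes the representation and, potentially, the cluster assignments. The toy corpus and choices below are assumptions for illustration only.

```python
# Hedged sketch: the same corpus and algorithm under two preprocessing
# choices; reporting both makes the analytical pipeline transparent.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "employees report high engagement with the new policy",
    "the policy change reduced employee turnover",
    "quarterly revenue exceeded the forecast",
    "revenue growth was driven by new markets",
]

for stop_words in (None, "english"):  # one preprocessing decision to report
    X = TfidfVectorizer(stop_words=stop_words).fit_transform(docs)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(f"stop_words={stop_words}: cluster labels {labels.tolist()}")
```
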
{"title":"Advancing Reproducibility and Accountability of Unsupervised Machine Learning in Text Mining: Importance of Transparency in Reporting Preprocessing and Algorithm Selection","authors":"L. Valtonen, S. Mäkinen, J. Kirjavainen","doi":"10.1177/10944281221124947","DOIUrl":"https://doi.org/10.1177/10944281221124947","url":null,"abstract":"Machine learning (ML) enables the analysis of large datasets for pattern discovery. ML methods and the standards for their use have recently attracted increasing attention in organizational research; recent accounts have raised awareness of the importance of transparent ML reporting practices, especially considering the influence of preprocessing and algorithm choice on analytical results. However, efforts made thus far to advance the quality of ML research have failed to consider the special methodological requirements of unsupervised machine learning (UML) separate from the more common supervised machine learning (SML). We confronted these issues by studying a common organizational research dataset of unstructured text and discovered interpretability and representativeness trade-offs between combinations of preprocessing and UML algorithm choices that jeopardize research reproducibility, accountability, and transparency. We highlight the need for contextual justifications to address such issues and offer principles for assessing the contextual suitability of UML choices in research settings.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44146497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reviewer Resources: Confirmatory Factor Analysis
Christopher D. Nye
Pub Date: 2022-08-31, DOI: 10.1177/10944281221120541
Confirmatory factor analysis (CFA) is widely used in the organizational literature. As a result, understanding how to properly conduct these analyses, report the results, and interpret their implications is critically important for advancing organizational research. The goal of this paper is to summarize the complexities of CFA models and thereby provide a resource for journal reviewers and researchers who use CFA in their research. The topics covered include the estimation process, power analyses, model fit, and model modifications, among others. The paper concludes with a checklist that summarizes the key points discussed and can be used to evaluate future studies that incorporate CFA.

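Several checklist items reduce to recomputable quantities. As one hedged example (invented numbers, not from the paper), a reviewer can recompute the comparative fit index from the reported model and null-model chi-squares:

```python
# Hedged illustration of one fit index a CFA report should support recomputing:
# CFI compares the tested model's misfit against the null (independence) model.
def cfi(chi2_model: float, df_model: int, chi2_null: float, df_null: int) -> float:
    # CFI = 1 - max(chi2_m - df_m, 0) / max(chi2_0 - df_0, chi2_m - df_m, 0)
    num = max(chi2_model - df_model, 0.0)
    den = max(chi2_null - df_null, chi2_model - df_model, 0.0)
    return 1.0 - (num / den if den > 0 else 0.0)

# Hypothetical values: model chi2 = 85.2 (df = 32), null chi2 = 900.0 (df = 45)
print(f"CFI = {cfi(85.2, 32, 900.0, 45):.3f}")  # ~0.94, just below the common .95 rule of thumb
```
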
{"title":"Reviewer Resources: Confirmatory Factor Analysis","authors":"Christopher D. Nye","doi":"10.1177/10944281221120541","DOIUrl":"https://doi.org/10.1177/10944281221120541","url":null,"abstract":"Confirmatory factor analyses (CFA) are widely used in the organizational literature. As a result, understanding how to properly conduct these analyses, report the results, and interpret their implications is critically important for advancing organizational research. The goal of this paper is to summarize the complexities of CFA models and, therefore, to provide a resource for journal reviewers and researchers who are using CFA in their research. The topics covered in this paper include the estimation process, power analyses, model fit, and model modifications, among other things. In addition, this paper concludes with a checklist that summarizes the key points that are discussed and can be used to evaluate future studies that incorporate CFA.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2022-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47697424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}