Team Composition Revisited: Expanding the Team Member Attribute Alignment Approach to Consider Patterns of More Than Two Attributes
Pub Date: 2023-05-03 | DOI: 10.1177/10944281231166656
Kyle J. Emich, M. McCourt, Li Lu, Amanda J. Ferguson, R. Peterson
The attribute alignment approach to team composition allows researchers to assess variation in team member attributes that occurs simultaneously within and across individual team members. This approach facilitates the development of theory testing the proposition that individual members are themselves complex systems composed of multiple attributes and that the configuration of those attributes affects team-level processes and outcomes. Here, we expand this approach, originally developed for two attributes, by describing three ways researchers may capture the alignment of three or more team member attributes: (a) a geometric approach, (b) a physical approach accentuating ideal alignment, and (c) an algebraic approach accentuating the direction (as opposed to the magnitude) of alignment. We also provide examples of the research questions each approach could answer and compare the methods empirically using a synthetic dataset of 100 teams of three to seven members assessed on four attributes. Then, we provide a practical guide to selecting an appropriate method by answering several common questions about applying attribute alignment to team-member attribute patterns. Finally, we provide code ( https://github.com/kjem514/Attribute-Alignment-Code ) and apply the approach to a field dataset in our appendices.
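To make the geometric intuition concrete, here is a minimal sketch under an assumed, simplified operationalization: each attribute is treated as a team-centered vector across members, and alignment is the average pairwise cosine similarity among those vectors. This is not the authors' published set of indices (their code is at the GitHub link above); the function name and the random example data are illustrative only.

```python
# Illustrative stand-in for a geometric alignment index; not the authors'
# published measures (see their GitHub repository for those).
import numpy as np

def attribute_alignment(team: np.ndarray) -> float:
    """Average pairwise cosine similarity among team-centered attribute
    vectors; rows are members, columns are attributes. Assumes each
    attribute varies within the team."""
    centered = team - team.mean(axis=0)                 # center each attribute within the team
    unit = centered / np.linalg.norm(centered, axis=0)  # unit-length attribute vectors
    cos = unit.T @ unit                                 # attribute-by-attribute cosine matrix
    pairs = cos[np.triu_indices_from(cos, k=1)]         # unique attribute pairs
    return float(pairs.mean())

# Example: one 5-member team measured on 4 attributes
rng = np.random.default_rng(0)
print(attribute_alignment(rng.normal(size=(5, 4))))
```

Values near 1 would indicate attributes that rise and fall together across members; values near -1, attributes concentrated in different members.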
{"title":"Team Composition Revisited: Expanding the Team Member Attribute Alignment Approach to Consider Patterns of More Than Two Attributes","authors":"Kyle J. Emich, M. McCourt, Li Lu, Amanda J. Ferguson, R. Peterson","doi":"10.1177/10944281231166656","DOIUrl":"https://doi.org/10.1177/10944281231166656","url":null,"abstract":"The attribute alignment approach to team composition allows researchers to assess variation in team member attributes, which occurs simultaneously within and across individual team members. This approach facilitates the development of theory testing the proposition that individual members are themselves complex systems comprised of multiple attributes and that the configuration of those attributes affects team-level processes and outcomes. Here, we expand this approach, originally developed for two attributes, by describing three ways researchers may capture the alignment of three or more team member attributes: (a) a geometric approach, (b) a physical approach accentuating ideal alignment, and (c) an algebraic approach accentuating the direction (as opposed to magnitude) of alignment. We also provide examples of the research questions each could answer and compare the methods empirically using a synthetic dataset assessing 100 teams of three to seven members across four attributes. Then, we provide a practical guide to selecting an appropriate method when considering team-member attribute patterns by answering several common questions regarding applying attribute alignment. Finally, we provide code ( https://github.com/kjem514/Attribute-Alignment-Code ) and apply this approach to a field data set in our appendices.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43887552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Macro-iterativity: A Qualitative Multi-arc Design for Studying Complex Issues and Big Questions
Pub Date: 2023-04-17 | DOI: 10.1177/10944281231166649
Christina Hoon, Alina M. Baluch
The impact and relevance of our discipline's research are determined by its ability to engage the big questions of the grand challenges we face today. Our central argument is that we need innovative methods that engage large-scope phenomena, not least because these phenomena benefit from going beyond individual study designs. We introduce the concept of macro-iterativity, which involves multiple iterations that move between, and link across, a set of research cycles. We offer a multi-arc research design comprising a discovery arc, an extension arc, and three extension logics through which scholars can combine these arcs of inquiry in a coherent way. Based on this research design, we develop a roadmap that guides scholars through four steps of engaging in multi-arc research, along with the main techniques and outputs. We argue that a multi-arc design supports the move toward the more generative theorizing required for researching the complex issues and big questions of our time.
{"title":"Macro-iterativity: A Qualitative Multi-arc Design for Studying Complex Issues and Big Questions","authors":"Christina Hoon, Alina M. Baluch","doi":"10.1177/10944281231166649","DOIUrl":"https://doi.org/10.1177/10944281231166649","url":null,"abstract":"The impact and relevance of our discipline's research is determined by its ability to engage the big questions of the grand challenges we face today. Our central argument is that we need innovative methods that engage large-scope phenomena, not least because these phenomena benefit from going beyond individual study design. We introduce the concept of macro-iterativity which involves multiple iterations that move between, and link across, a set of research cycles. We offer a multi-arc research design that comprises the discovery arc and extension arc and three extension logics through which scholars can combine these arcs of inquiry in a coherent way. Based on this research design, we develop a roadmap that guides scholars through the four steps of how to engage in multi-arc research along with the main techniques and outputs. We argue that a multi-arc design supports the move toward more generative theorizing that is required for researching problems dealing with the complex issues and big questions of our time.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"1 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41372087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
“Transforming” Personality Scale Development: Illustrating the Potential of State-of-the-Art Natural Language Processing
Pub Date: 2023-03-06 | DOI: 10.1177/10944281231155771
Shea Fyffe, Philseok Lee, Seth A. Kaplan
Natural language processing (NLP) techniques are becoming increasingly popular in industrial and organizational psychology. One promising area for NLP-based applications is scale development; yet, while many possibilities exist, applications so far have been restricted mainly to automated item generation. The current research expands this potential by illustrating an NLP-based approach to content analysis, in which scale items are categorized by the constructs they measure, a task traditionally performed by hand. In NLP, content analysis becomes a text classification task in which a model is trained to automatically assign scale items to the constructs they measure. Here, we present an approach to text classification—using state-of-the-art transformer models—that builds upon past approaches. We begin by introducing transformer models and their advantages over alternative methods. Next, we illustrate how to train a transformer to content analyze Big Five personality items. Then, we compare the trained models to human raters, finding that transformer models outperform human raters and several alternative models. Finally, we present practical considerations, limitations, and future research directions.
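For a sense of the task's shape, the sketch below runs zero-shot classification with the Hugging Face transformers library. The authors fine-tune transformer models on labeled items rather than relying on zero-shot inference, so treat this as a lightweight stand-in; the model choice and example item are illustrative.

```python
# Zero-shot stand-in for transformer-based content analysis; the article
# fine-tunes models on labeled items rather than using zero-shot inference.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

constructs = ["extraversion", "agreeableness", "conscientiousness",
              "neuroticism", "openness"]
item = "I am the life of the party."

result = classifier(item, candidate_labels=constructs)
# Labels come back sorted by score; the top label is the predicted construct.
print(result["labels"][0], round(result["scores"][0], 3))
```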
{"title":"“Transforming” Personality Scale Development: Illustrating the Potential of State-of-the-Art Natural Language Processing","authors":"Shea Fyffe, Philseok Lee, Seth A. Kaplan","doi":"10.1177/10944281231155771","DOIUrl":"https://doi.org/10.1177/10944281231155771","url":null,"abstract":"Natural language processing (NLP) techniques are becoming increasingly popular in industrial and organizational psychology. One promising area for NLP-based applications is scale development; yet, while many possibilities exist, so far these applications have been restricted—mainly focusing on automated item generation. The current research expands this potential by illustrating an NLP-based approach to content analysis, which manually categorizes scale items by their measured constructs. In NLP, content analysis is performed as a text classification task whereby a model is trained to automatically assign scale items to the construct that they measure. Here, we present an approach to text classification—using state-of-the-art transformer models—that builds upon past approaches. We begin by introducing transformer models and their advantages over alternative methods. Next, we illustrate how to train a transformer to content analyze Big Five personality items. Then, we compare the models trained to human raters, finding that transformer models outperform human raters and several alternative models. Finally, we present practical considerations, limitations, and future research directions.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47097386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supervised Construct Scoring to Reduce Personality Assessment Length: A Field Study and Introduction to the Short 10
Pub Date: 2023-01-03 | DOI: 10.1177/10944281221145694
Andrew B. Speer, James Perrotta, R. Jacobs
Personality assessments help identify qualified job applicants in hiring decisions and are used broadly in the organizational sciences. However, many existing personality measures are quite lengthy, and companies and researchers frequently seek ways to shorten personality scales. The current research investigated the effectiveness of a new scale-shortening method called supervised construct scoring (SCS), testing its efficacy across two applied samples. Combining machine learning with content validity considerations, we show that multidimensional personality scales can be substantially shortened while maintaining reliability and validity, especially when compared to traditional shortening methods. In Study 1, we shortened a 100-item personality assessment of DeYoung et al.'s 10 facets, producing a scale 26% of the original length. SCS scores exhibited strong evidence of reliability, convergence with full-scale scores, and criterion-related validity. This measure, labeled the Short 10, is made freely available. In Study 2, we applied SCS to shorten an operational police personality assessment, reducing test length to 25% of the original while maintaining similar levels of reliability and criterion-related validity when predicting job performance ratings.
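The article specifies SCS in full; as one plausible reading of its machine-learning component, the sketch below uses cross-validated LASSO regression to retain a sparse item subset whose scores track the full-scale score. The content-validity review that SCS pairs with this step is not modeled here, and the data are simulated placeholders.

```python
# Sketch of the item-selection idea behind supervised scale shortening
# (an assumed reading of the ML component; SCS's content-validity step
# is omitted, and all data shapes are hypothetical).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
items = rng.normal(size=(500, 100))   # 500 respondents x 100 items (simulated)
full_scale = items.mean(axis=1)       # full-scale score as the prediction target

lasso = LassoCV(cv=5).fit(items, full_scale)
kept = np.flatnonzero(lasso.coef_)    # items with nonzero weights survive
print(f"retained {kept.size} of {items.shape[1]} items")
```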
{"title":"Supervised Construct Scoring to Reduce Personality Assessment Length: A Field Study and Introduction to the Short 10","authors":"Andrew B. Speer, James Perrotta, R. Jacobs","doi":"10.1177/10944281221145694","DOIUrl":"https://doi.org/10.1177/10944281221145694","url":null,"abstract":"Personality assessments help identify qualified job applicants when making hiring decisions and are used broadly in the organizational sciences. However, many existing personality measures are quite lengthy, and companies and researchers frequently seek ways to shorten personality scales. The current research investigated the effectiveness of a new scale-shortening method called supervised construct scoring (SCS), testing the efficacy of this method across two applied samples. Using a combination of machine learning with content validity considerations, we show that multidimensional personality scales can be significantly shortened while maintaining reliability and validity, and especially when compared to traditional shortening methods. In Study 1, we shortened a 100-item personality assessment of DeYoung et al.'s 10 facets, producing a scale 26% the original length. SCS scores exhibited strong evidence of reliability, convergence with full scale scores, and criterion-related validity. This measure, labeled the Short 10, is made freely available. In Study 2, we applied SCS to shorten an operational police personality assessment. By using SCS, we reduced test length to 25% of the original length while maintaining similar levels of reliability and criterion-related validity when predicting job performance ratings.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48250283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Review Research as Scientific Inquiry
Pub Date: 2022-12-26 | DOI: 10.1177/10944281221127292 | Vol. 26, pp. 3-45
Sven Kunisch, D. Denyer, J. Bartunek, Markus Menz, Laura B. Cardinal
This article and the related upcoming Feature Topic at Organizational Research Methods were motivated by the concern that, despite the burgeoning number and diversity of review articles, there was a lack of guidance on how to produce rigorous and impactful literature reviews. In this article, we introduce review research as a class of research inquiries that uses prior research as data sources to develop knowledge contributions for academia, practice, and policy. We first trace the evolution of review research both outside of and within management, including the articles published in this Feature Topic, and provide a holistic definition of review research. We then argue that, given the plurality of forms of review research, the alignment of purpose and methods is crucial for high-quality review research. To this end, we discuss several review purposes, present criteria for assessing the rigor and impact of review research, and show how these criteria and the review methods need to be aligned with the review's purpose. Our paper provides guidance for conducting and evaluating review research and helps establish review research as a credible and legitimate scientific endeavor.
{"title":"Review Research as Scientific Inquiry","authors":"Sven Kunisch, D. Denyer, J. Bartunek, Markus Menz, Laura B. Cardinal","doi":"10.1177/10944281221127292","DOIUrl":"https://doi.org/10.1177/10944281221127292","url":null,"abstract":"This article and the related Feature Topic at Organizational Research Methods upcoming were motivated by the concern that despite the bourgeoning number and diversity of review articles, there was a lack of guidance on how to produce rigorous and impactful literature reviews. In this article, we introduce review research as a class of research inquiries that uses prior research as data sources to develop knowledge contributions for academia, practice and policy. We first trace the evolution of review research both outside of and within management including the articles published in this Feature Topic, and provide a holistic definition of review research. Then, we argue that in the plurality of forms of review research, the alignment of purpose and methods is crucial for high-quality review research. To accomplish this, we discuss several review purposes and criteria for assessing review research's rigor and impact, and discuss how these and the review methods need to be aligned with its purpose. Our paper provides guidance for conducting or evaluating review research and helps establish review research as a credible and legitimate scientific endeavor.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"26 1","pages":"3 - 45"},"PeriodicalIF":9.5,"publicationDate":"2022-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43364600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SRM_R: A Web-Based Shiny App for Social Relations Analyses
Pub Date: 2022-11-20 | DOI: 10.1177/10944281221134104
Man-Nok Wong, D. Kenny, A. Knight
Many topics in organizational research involve examining the interpersonal perceptions and behaviors of group members. The resulting data can be analyzed using the social relations model (SRM). This model enables researchers to address several important questions regarding relational phenomena: variance can be partitioned into group, actor, partner, and relationship components; reciprocity can be assessed for individuals and dyads; and predictors at each of these levels can be analyzed. However, analyzing data with the currently available SRM software can be challenging, which can deter organizational researchers from using the model. In this article, we provide a “go-to” introduction to SRM analyses and present SRM_R ( https://davidakenny.shinyapps.io/SRM_R/ ), an accessible, user-friendly, web-based application for SRM analyses. The basic steps of conducting SRM analyses in the app are illustrated with a sample dataset of 47 teams, 228 members, and 884 dyadic observations, using participants' ratings of the advice-seeking behavior of their fellow employees.
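At its core, the SRM decomposes a dyadic rating as x_ij = group mean + actor effect of i + partner effect of j + relationship residual. The sketch below computes naive row/column-mean versions of those components for one round-robin group; the app implements the model's proper bias-corrected estimators, so this is an intuition aid only, with made-up ratings.

```python
# Naive SRM-style decomposition of one round-robin group (illustrative;
# not the bias-corrected estimators implemented in SRM_R).
import numpy as np

nan = np.nan
ratings = np.array([[nan, 4, 5, 3],
                    [2, nan, 4, 4],
                    [3, 5, nan, 2],
                    [4, 3, 5, nan]], dtype=float)  # entry [i, j]: i rates j

grand = np.nanmean(ratings)                     # group mean
actor = np.nanmean(ratings, axis=1) - grand     # how i rates others, on average
partner = np.nanmean(ratings, axis=0) - grand   # how others rate j, on average
relationship = ratings - grand - actor[:, None] - partner[None, :]

print(grand, actor.round(2), partner.round(2), relationship.round(2), sep="\n")
```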
{"title":"SRM_R: A Web-Based Shiny App for Social Relations Analyses","authors":"Man-Nok Wong, D. Kenny, A. Knight","doi":"10.1177/10944281221134104","DOIUrl":"https://doi.org/10.1177/10944281221134104","url":null,"abstract":"Many topics in organizational research involve examining the interpersonal perceptions and behaviors of group members. The resulting data can be analyzed using the social relations model (SRM). This model enables researchers to address several important questions regarding relational phenomena. In the model, variance can be partitioned into group, actor, partner, and relationship; reciprocity can be assessed in terms of individuals and dyads; and predictors at each of these levels can be analyzed. However, analyzing data using the currently available SRM software can be challenging and can deter organizational researchers from using the model. In this article, we provide a “go-to” introduction to SRM analyses and propose SRM_R ( https://davidakenny.shinyapps.io/SRM_R/ ), an accessible and user-friendly, web-based application for SRM analyses. The basic steps of conducting SRM analyses in the app are illustrated with a sample dataset of 47 teams, 228 members, and 884 dyadic observations, using the participants’ ratings of the advice-seeking behavior of their fellow employees.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2022-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48039279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensitizing Social Interaction with a Mode-Enhanced Transcribing Process
Pub Date: 2022-10-31 | DOI: 10.1177/10944281221134096
Qian Li
Qualitative researchers often work with texts transcribed from social interactions such as interviews, meetings, and presentations. However, how we make sense of such data to generate promising cues for further analysis is rarely discussed. This article proposes mode-enhanced transcription as a tool for sensitizing social interaction data, defined as a process in which researchers attune their attention to the dynamic interplay of verbal and nonverbal features, expressions, and acts when transcribing and proofreading professional transcripts. Two scenarios for using mode-enhanced transcription are introduced: sensitizing previously collected data and engaging with modes purposefully. Their implications for research focus, data collection, and data analysis are discussed based on a demonstration of the process with a previously collected dataset and an illustrative review of published articles that display mode-enhanced excerpts. The article outlines the benefits and further considerations of using mode-enhanced transcription as a sensitizing tool.
{"title":"Sensitizing Social Interaction with a Mode-Enhanced Transcribing Process","authors":"Qian Li","doi":"10.1177/10944281221134096","DOIUrl":"https://doi.org/10.1177/10944281221134096","url":null,"abstract":"Qualitative researchers often work with texts transcribed from social interactions such as interviews, meetings, and presentations. However, how we make sense of such data to generate promising cues for further analysis is rarely discussed. This article proposes mode-enhanced transcription as a tool for sensitizing social interaction data, defined as a process in which researchers attune their attention to the dynamic interplay of verbal and nonverbal features, expressions, and acts when transcribing and proofreading professional transcripts. Two scenarios for using mode-enhanced transcription are introduced: sensitizing previously collected data and engaging with modes purposefully. Their implications for research focus, data collection, and data analysis are discussed based on a demonstration of the process with a previously collected dataset and an illustrative review of published articles that display mode-enhanced excerpts. The article outlines the benefits and further considerations of using mode-enhanced transcription as a sensitizing tool.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2022-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45729472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessment of Path Model Fit: Evidence of Effectiveness and Recommendations for Use of the RMSEA-P
Pub Date: 2022-10-17 | DOI: 10.1177/10944281221124946
L. J. Williams, Aaron R. Williams, Ernest H. O’Boyle
We review the development of path model fit measures for latent variable models and highlight how they differ from global fit measures. Next, we consider findings from two published simulation articles that reach different conclusions about the effectiveness of one path model fit measure, the RMSEA-P. We then report the results of a new simulation study aimed at resolving whether and how the RMSEA-P should be used by organizational researchers. These results show that the RMSEA-P and its confidence interval are very effective at identifying misspecifications in multiple-indicator models across large and small sample sizes, and effective at identifying true models at moderate to large sample sizes. We conclude with recommendations for how the RMSEA-P can be incorporated, along with other information, into model evaluation.
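For orientation, RMSEA-style indices follow the form RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1))); as we understand it, the RMSEA-P applies this form to the structural (path) portion of the model, with the relevant chi-square and degrees of freedom isolated through a nested-model comparison. Consult the article for the exact procedure; the sketch below just computes the generic formula, and all input values are made up.

```python
# RMSEA from a chi-square statistic (generic formula; the RMSEA-P applies
# it to the structural portion of the model -- see the article for how the
# structural chi-square and df are obtained).
from math import sqrt

def rmsea(chi2: float, df: int, n: int) -> float:
    """sqrt(max(chi2 - df, 0) / (df * (n - 1)))"""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical structural-portion values: chi-square 18.3 on 6 df, N = 250
print(round(rmsea(18.3, 6, 250), 3))
```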
{"title":"Assessment of Path Model Fit: Evidence of Effectiveness and Recommendations for use of the RMSEA-P","authors":"L. J. Williams, Aaron R. Williams, Ernest H. O’Boyle","doi":"10.1177/10944281221124946","DOIUrl":"https://doi.org/10.1177/10944281221124946","url":null,"abstract":"We review the development of path model fit measures for latent variable models and highlight how they are different from global fit measures. Next, we consider findings from two published simulation articles that reach different conclusions about the effectiveness of one path model fit measure (RMSEA-P). We then report the results of a new simulation study aimed at resolving the questions of whether and how the RMSEA-P should be used by organizational researchers. These results show that the RMSEA-P and its confidence interval is very effective with multiple indicator models at identifying misspecifications across large and small sample sizes and is effective at identifying true models at moderate to large sample sizes. We conclude with recommendations for how the RMSEA-P can be incorporated along with other information into model evaluation.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2022-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44950558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Should Moderated Regressions Include or Exclude Quadratic Terms? Present Both! Then Apply Our Linear Algebraic Analysis to Identify the Preferable Specification
Pub Date: 2022-10-11 | DOI: 10.1177/10944281221124945
A. Kalnins
Organizational research increasingly tests moderated relationships using multiple regression with interaction terms. Most research does so with little concern for curvilinear relationships. But methodologists have established that omitting quadratic terms of correlated primary variables may create false interaction positives (Type I errors). If dependent variables are generated by the canonical process in which fully specified regressions satisfy the Gauss-Markov assumptions, including quadratics solves the problem. But our empirical analysis of published organizational research suggests that dependent variables are often generated by processes where, even with quadratics included, regression analyses remain Gauss-Markov non-compliant. In such cases, our linear algebraic analysis demonstrates that including quadratics—even those motivated by compelling theory—may exacerbate rather than mitigate the incidence of false interaction positives. The interaction coefficient may substantially change in magnitude and even flip sign once quadratics are included, and not necessarily for the better. We therefore encourage researchers to present two full sets of results when testing moderation hypotheses—one with, and one without, quadratic terms—and then answer five questions developed here to determine the preferable set of results.
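A minimal sketch of the recommended practice of fitting and presenting both specifications, using simulated data in which the true process is curvilinear with no interaction; all variable names and coefficients are illustrative.

```python
# Fit the moderated regression with and without quadratic terms, as the
# authors recommend presenting both. Simulated data: correlated predictors,
# true curvilinearity, no true interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)               # correlated primary variables
y = x1 + x2 + 0.5 * x1**2 + rng.normal(size=n)   # quadratic effect, no interaction
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})

without_q = smf.ols("y ~ x1 * x2", data=df).fit()
with_q = smf.ols("y ~ x1 * x2 + I(x1**2) + I(x2**2)", data=df).fit()

# Compare the interaction coefficient across the two specifications.
print(without_q.params["x1:x2"], with_q.params["x1:x2"])
```

With the quadratic omitted, the interaction term can soak up curvilinear signal and appear spuriously large; presenting both fits makes that divergence visible.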
{"title":"Should Moderated Regressions Include or Exclude Quadratic Terms? Present Both! Then Apply Our Linear Algebraic Analysis to Identify the Preferable Specification","authors":"A. Kalnins","doi":"10.1177/10944281221124945","DOIUrl":"https://doi.org/10.1177/10944281221124945","url":null,"abstract":"Organizational research increasingly tests moderated relationships using multiple regression with interaction terms. Most research does so with little concern regarding curvilinear relationships. But methodologists have established that omitting quadratic terms of correlated primary variables may create false interaction positives (type 1 errors). If dependent variables are generated by the canonical process where fully specified regressions satisfy the Gauss-Markov assumptions, including quadratics solves the problem. But our empirical analysis of published organizational research suggests that dependent variables are often generated by processes where, even with quadratics included, regression analyses will remain Gauss-Markov non-compliant. In such cases, our linear algebraic analysis demonstrates that including quadratics—even those motivated by compelling theory—may exacerbate rather than mitigate the incidence of false interaction positives. The interaction coefficient may substantially change its magnitude and even flip sign once quadratics are included, and not necessarily for the better. We encourage researchers to present two full sets of results when testing moderating hypotheses—one with, and one without, quadratic terms. Researchers should then answer five questions developed here in order to determine the preferable set of results.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2022-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45785943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}