Demographic Inference in the Digital Age: Using Neural Networks to Assess Gender and Ethnicity at Scale
Pub Date: 2023-06-14 | DOI: 10.1177/10944281231175904
Amal Chekili, Ivan Hernandez
Gender and ethnicity are increasingly studied topics within I-O psychology, helpful for understanding the composition of collectives, the experiences of marginalized group members, and differences in outcomes between demographic groups, and for capturing diversity at higher levels. However, the absence of explicit, structured demographic information online makes applying these research questions to Big Data sources challenging. We highlight how deep neural networks can infer demographics from people's names, which are commonly found online (e.g., social media profiles, employee pages, and membership rosters). Using broad international data to train and evaluate these models, we find that validity coefficients meet minimum reliability thresholds at the individual level (r_gender = .91, r_ethnicity = .80), highlighting their ability to contextualize and facilitate Big Data research. Using empirical data extracted from databases, websites, and mobile apps, we show how these models can be applied to large organizational data sets by presenting illustrative demonstrations of research questions that incorporate the information the models provide. To promote broader usage, we offer an online application that infers demographics from names without requiring advanced programming knowledge.
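For readers who want a concrete feel for name-based inference, the sketch below is a minimal illustration in Python, not the authors' deep network: it pairs character n-gram features with a small scikit-learn neural classifier, and the name-gender pairs are made up for illustration (the paper trains on broad international data at far larger scale).

```python
# Minimal sketch of name-based gender inference with a small neural network.
# NOT the authors' model: character n-grams + scikit-learn's MLPClassifier
# on a tiny hypothetical dataset, purely to show the classification framing.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical toy training data (real training sets contain millions of names).
names = ["maria", "james", "wei", "fatima", "john", "anna", "carlos", "mei"]
labels = ["F", "M", "M", "F", "M", "F", "M", "F"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # sub-word cues
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(names, labels)

# Predictions on held-out names (toy model; accuracy not guaranteed at this scale).
print(model.predict(["johanna", "michael"]))
```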
{"title":"Demographic Inference in the Digital Age: Using Neural Networks to Assess Gender and Ethnicity at Scale","authors":"Amal Chekili, Ivan Hernandez","doi":"10.1177/10944281231175904","DOIUrl":"https://doi.org/10.1177/10944281231175904","url":null,"abstract":"Gender and ethnicity are increasingly studied topics within I-O psychology, helpful for understanding the composition of collectives, experiences of marginalized group members, and differences in outcomes between demographics and capturing diversity at higher levels. However, the absence of explicit, structured, demographic information online makes applying these research questions to Big Data sources challenging. We highlight how deep neural networks can be used to infer demographics based on people's names, which are commonly found online (e.g., social media profiles, employee pages, and membership rosters), using broad international data to train and evaluate the effectiveness of these models and find that validity coefficients meet minimum reliability thresholds at the individual level ( rgender = .91, rethnicity = .80) highlighting their ability to contextualize and facilitate Big Data research. Using empirical data extracted from databases, websites, and mobile apps, we highlight how these models can be applied to large organizational data sets by presenting illustrative demonstrations of research questions that incorporate the information provided by the model. To promote broader usage, we offer an online application to infer demographics from names without requiring advanced programming knowledge.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46027719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heterogeneity in Meta-Analytic Effect Sizes: An Assessment of the Current State of the Literature
Pub Date: 2023-05-19 | DOI: 10.1177/10944281231169942
S. Kepes, Wenhao Wang, J. Cortina
Heterogeneity refers to the variability in effect sizes across different samples and is one of the major criteria for judging the importance and advancement of a scientific area. To determine how studies in the organizational sciences address heterogeneity, we conduct two studies. In Study 1, we examine how meta-analytic studies conduct heterogeneity assessments and report and interpret the obtained results. To do so, we coded heterogeneity-related information from meta-analytic studies published in five leading journals. We found that most meta-analytic studies report several heterogeneity statistics. At the same time, however, there tends to be a lack of detail and thoroughness in the interpretation of these statistics. In Study 2, we review how primary studies report heterogeneity-related results and conclusions from meta-analyses. We found that the quality of the reporting of heterogeneity-related information in primary studies tends to be poor and unrelated to the detail and thoroughness with which meta-analytic studies report and interpret the statistics. Based on our findings, we discuss implications for practice and provide recommendations for how heterogeneity assessments should be conducted and communicated in future research.
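The heterogeneity statistics such studies report can be computed directly from study-level effect sizes and their sampling variances. As a worked illustration with hypothetical inputs, the Python sketch below computes three of the most commonly reported statistics: Cochran's Q, the DerSimonian-Laird estimate of tau-squared, and I-squared.

```python
# Worked example of standard meta-analytic heterogeneity statistics.
# The effect sizes and sampling variances below are hypothetical.
import numpy as np

yi = np.array([0.05, 0.55, 0.10, 0.60, 0.30])  # study effect sizes (e.g., r or d)
vi = np.array([0.02, 0.03, 0.01, 0.04, 0.02])  # their sampling variances

wi = 1.0 / vi                        # inverse-variance (fixed-effect) weights
y_bar = np.sum(wi * yi) / np.sum(wi)
Q = np.sum(wi * (yi - y_bar) ** 2)   # Cochran's Q
df = len(yi) - 1
c = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
tau2 = max(0.0, (Q - df) / c)        # DerSimonian-Laird between-study variance
I2 = max(0.0, (Q - df) / Q) * 100    # % of variability beyond sampling error

print(f"Q = {Q:.2f} (df = {df}), tau^2 = {tau2:.4f}, I^2 = {I2:.1f}%")
```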
{"title":"Heterogeneity in Meta-Analytic Effect Sizes: An Assessment of the Current State of the Literature","authors":"S. Kepes, Wenhao Wang, J. Cortina","doi":"10.1177/10944281231169942","DOIUrl":"https://doi.org/10.1177/10944281231169942","url":null,"abstract":"Heterogeneity refers to the variability in effect sizes across different samples and is one of the major criteria to judge the importance and advancement of a scientific area. To determine how studies in the organizational sciences address heterogeneity, we conduct two studies. In study 1, we examine how meta-analytic studies conduct heterogeneity assessments and report and interpret the obtained results. To do so, we coded heterogeneity-related information from meta-analytic studies published in five leading journals. We found that most meta-analytic studies report several heterogeneity statistics. At the same time, however, there tends to be a lack of detail and thoroughness in the interpretation of these statistics. In study 2, we review how primary studies report heterogeneity-related results and conclusions from meta-analyses. We found that the quality of the reporting of heterogeneity-related information in primary studies tends to be poor and unrelated to the detail and thoroughness with which meta-analytic studies report and interpret the statistics. Based on our findings, we discuss implications for practice and provide recommendations for how heterogeneity assessments should be conducted and communicated in future research.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43622205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Common-Metric Effect Sizes to Refine Mediation Models
Pub Date: 2023-05-08 | DOI: 10.1177/10944281231169943
Juan I. Sanchez, Chen Wang, A. Ponnapalli, Hock-Peng Sin, Le Xu, M. Lapeira, Mohan Song
Mediation analysis tests X → M → Y processes in which an independent variable (X) exerts an indirect effect on a dependent variable (Y) through its influence on an intervening, or mediator, variable (M). A preponderance of mediation studies, however, focuses solely on determining whether mediation effects are statistically significant, rather than on what the results tell us about potential theoretical refinements to the mediation model. We argue in favor of employing a set of three standardized effect sizes based on variance proportions that allow researchers to compare their results with those of other mediation studies employing similar combinations of X, M, and Y variables. These standardized effect sizes constitute a set of common metrics signaling potential gaps in a mediation model and, as such, provide useful insights for the theoretical refinement of mediation models in organizational research. We illustrate the utility of comparing these common-metric effect sizes using the examples of abusive and transformational leadership effects on employee outcomes as transmitted by social exchange quality.
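As context for the common-metric idea (though not a reproduction of the article's three variance-proportion effect sizes), the Python sketch below estimates the standardized a- and b-paths of an X → M → Y model and their product, the indirect effect, on simulated data; all coefficients and data are hypothetical.

```python
# Minimal sketch of a standardized indirect effect (a*b) for X -> M -> Y,
# estimated with ordinary least squares on simulated, hypothetical data.
# The article's three variance-proportion effect sizes are NOT computed here.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # a-path: X -> M
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # b-path plus a direct effect

def std(v):
    return (v - v.mean()) / v.std()

x, m, y = std(x), std(m), std(y)

a = np.polyfit(x, m, 1)[0]                   # standardized a-path
design = np.column_stack([m, x, np.ones(n)])  # regress Y on M and X jointly
b, c_prime = np.linalg.lstsq(design, y, rcond=None)[0][:2]

print(f"a = {a:.3f}, b = {b:.3f}, indirect a*b = {a*b:.3f}, direct c' = {c_prime:.3f}")
```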
{"title":"Assessing Common-Metric Effect Sizes to Refine Mediation Models","authors":"Juan I. Sanchez, Chen Wang, A. Ponnapalli, Hock-Peng Sin, Le Xu, M. Lapeira, Mohan Song","doi":"10.1177/10944281231169943","DOIUrl":"https://doi.org/10.1177/10944281231169943","url":null,"abstract":"Mediation analysis tests X → M → Y processes in which an independent variable ( X) exerts an indirect effect on a dependent variable ( Y) through its influence on an intervening or mediator variable ( M). A preponderance of mediation studies, however, focuses on determining solely whether mediation effects are statistically significant, instead of focusing on what the results tell us about potential theoretical refinements in the mediation model. We argue in favor of employing a set of three standardized effect sizes based on variance proportions that allow researchers to compare their results with those of other mediation studies employing similar combinations of X, M, and Y variables. These standardized effect sizes constitute a set of common metrics signaling potential gaps in a mediation model, and as such provide useful insights for the theoretical refinement of mediation models in organizational research. We illustrate the utility of comparing these common-metric effect sizes using the examples of abusive and transformational leadership effects on employee outcomes as transmitted by social exchange quality.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48021064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Out of Shape: The Implications of (Extremely) Nonnormal Dependent Variables
Pub Date: 2023-05-07 | DOI: 10.1177/10944281231167839
S. Trevis Certo, Kristen Raney, Latifa Albader, John R. Busenbark
Organizational researchers have increasingly noted the problems associated with nonnormal dependent variable distributions. Most of this scholarship focuses on variables with positive values and lo...
{"title":"Out of Shape: The Implications of (Extremely) Nonnormal Dependent Variables","authors":"S. Trevis Certo, Kristen Raney, Latifa Albader, John R. Busenbark","doi":"10.1177/10944281231167839","DOIUrl":"https://doi.org/10.1177/10944281231167839","url":null,"abstract":"Organizational researchers have increasingly noted the problems associated with nonnormal dependent variable distributions. Most of this scholarship focuses on variables with positive values and lo...","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"109 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50165318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Team Composition Revisited: Expanding the Team Member Attribute Alignment Approach to Consider Patterns of More Than Two Attributes
Pub Date: 2023-05-03 | DOI: 10.1177/10944281231166656
Kyle J. Emich, M. McCourt, Li Lu, Amanda J. Ferguson, R. Peterson
The attribute alignment approach to team composition allows researchers to assess variation in team member attributes, which occurs simultaneously within and across individual team members. This approach facilitates the development of theory testing the proposition that individual members are themselves complex systems composed of multiple attributes and that the configuration of those attributes affects team-level processes and outcomes. Here, we expand this approach, originally developed for two attributes, by describing three ways researchers may capture the alignment of three or more team member attributes: (a) a geometric approach, (b) a physical approach accentuating ideal alignment, and (c) an algebraic approach accentuating the direction (as opposed to the magnitude) of alignment. We also provide examples of the research questions each could answer and compare the methods empirically using a synthetic dataset assessing 100 teams of three to seven members across four attributes. Then, we provide a practical guide to selecting an appropriate method when considering team-member attribute patterns by answering several common questions regarding applying attribute alignment. Finally, we provide code (https://github.com/kjem514/Attribute-Alignment-Code) and apply this approach to a field data set in our appendices.
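To make the geometric idea concrete, the Python sketch below computes one simple alignment index, the average pairwise cosine similarity among members' attribute vectors, for a hypothetical four-member, four-attribute team. This is an illustration of the general idea only; the article's exact geometric, physical, and algebraic formulations are implemented in the authors' repository linked above.

```python
# Minimal sketch of a geometric alignment index: mean pairwise cosine
# similarity among team members' attribute vectors (hypothetical data).
import numpy as np
from itertools import combinations

# Hypothetical team of 4 members measured on 4 z-scored attributes.
team = np.array([
    [ 0.8,  0.5,  0.9,  0.3],
    [ 0.7,  0.6,  1.0,  0.2],
    [-0.4, -0.2, -0.5,  0.1],
    [ 0.9,  0.4,  0.8,  0.4],
])

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

pairs = combinations(range(len(team)), 2)
pairwise = [cosine(team[i], team[j]) for i, j in pairs]
print(f"Mean pairwise cosine alignment: {np.mean(pairwise):.3f}")
```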
{"title":"Team Composition Revisited: Expanding the Team Member Attribute Alignment Approach to Consider Patterns of More Than Two Attributes","authors":"Kyle J. Emich, M. McCourt, Li Lu, Amanda J. Ferguson, R. Peterson","doi":"10.1177/10944281231166656","DOIUrl":"https://doi.org/10.1177/10944281231166656","url":null,"abstract":"The attribute alignment approach to team composition allows researchers to assess variation in team member attributes, which occurs simultaneously within and across individual team members. This approach facilitates the development of theory testing the proposition that individual members are themselves complex systems comprised of multiple attributes and that the configuration of those attributes affects team-level processes and outcomes. Here, we expand this approach, originally developed for two attributes, by describing three ways researchers may capture the alignment of three or more team member attributes: (a) a geometric approach, (b) a physical approach accentuating ideal alignment, and (c) an algebraic approach accentuating the direction (as opposed to magnitude) of alignment. We also provide examples of the research questions each could answer and compare the methods empirically using a synthetic dataset assessing 100 teams of three to seven members across four attributes. Then, we provide a practical guide to selecting an appropriate method when considering team-member attribute patterns by answering several common questions regarding applying attribute alignment. Finally, we provide code ( https://github.com/kjem514/Attribute-Alignment-Code ) and apply this approach to a field data set in our appendices.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43887552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Macro-iterativity: A Qualitative Multi-arc Design for Studying Complex Issues and Big Questions
Pub Date: 2023-04-17 | DOI: 10.1177/10944281231166649
Christina Hoon, Alina M. Baluch
The impact and relevance of our discipline's research are determined by its ability to engage the big questions of the grand challenges we face today. Our central argument is that we need innovative methods that engage large-scope phenomena, not least because these phenomena benefit from going beyond individual study design. We introduce the concept of macro-iterativity, which involves multiple iterations that move between, and link across, a set of research cycles. We offer a multi-arc research design that comprises a discovery arc and an extension arc, along with three extension logics through which scholars can combine these arcs of inquiry in a coherent way. Based on this research design, we develop a roadmap that guides scholars through four steps of engaging in multi-arc research, along with the main techniques and outputs. We argue that a multi-arc design supports the move toward the more generative theorizing required for researching problems dealing with the complex issues and big questions of our time.
{"title":"Macro-iterativity: A Qualitative Multi-arc Design for Studying Complex Issues and Big Questions","authors":"Christina Hoon, Alina M. Baluch","doi":"10.1177/10944281231166649","DOIUrl":"https://doi.org/10.1177/10944281231166649","url":null,"abstract":"The impact and relevance of our discipline's research is determined by its ability to engage the big questions of the grand challenges we face today. Our central argument is that we need innovative methods that engage large-scope phenomena, not least because these phenomena benefit from going beyond individual study design. We introduce the concept of macro-iterativity which involves multiple iterations that move between, and link across, a set of research cycles. We offer a multi-arc research design that comprises the discovery arc and extension arc and three extension logics through which scholars can combine these arcs of inquiry in a coherent way. Based on this research design, we develop a roadmap that guides scholars through the four steps of how to engage in multi-arc research along with the main techniques and outputs. We argue that a multi-arc design supports the move toward more generative theorizing that is required for researching problems dealing with the complex issues and big questions of our time.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"1 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41372087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
“Transforming” Personality Scale Development: Illustrating the Potential of State-of-the-Art Natural Language Processing
Pub Date: 2023-03-06 | DOI: 10.1177/10944281231155771
Shea Fyffe, Philseok Lee, Seth A. Kaplan
Natural language processing (NLP) techniques are becoming increasingly popular in industrial and organizational psychology. One promising area for NLP-based applications is scale development; yet, while many possibilities exist, these applications have so far been restricted, mainly focusing on automated item generation. The current research expands this potential by illustrating an NLP-based approach to content analysis, the traditionally manual task of categorizing scale items by the constructs they measure. In NLP, content analysis is performed as a text classification task whereby a model is trained to automatically assign scale items to the constructs they measure. Here, we present an approach to text classification, using state-of-the-art transformer models, that builds upon past approaches. We begin by introducing transformer models and their advantages over alternative methods. Next, we illustrate how to train a transformer to content-analyze Big Five personality items. Then, we compare the trained models to human raters, finding that transformer models outperform human raters and several alternative models. Finally, we present practical considerations, limitations, and future research directions.
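As a rough illustration of this classification framing (the article fine-tunes transformer models on labeled items, which this sketch does not), the Python snippet below uses Hugging Face's zero-shot classification pipeline to assign a personality item to the Big Five construct it most plausibly measures.

```python
# Minimal sketch of content-analyzing a scale item with a transformer.
# Uses Hugging Face's zero-shot classification pipeline as a stand-in for
# the fine-tuned models the article trains.  pip install transformers
from transformers import pipeline

classifier = pipeline("zero-shot-classification")  # downloads a default model

item = "I am the life of the party."
constructs = ["extraversion", "agreeableness", "conscientiousness",
              "neuroticism", "openness to experience"]

result = classifier(item, candidate_labels=constructs)
# Labels come back sorted by score; index 0 is the top-ranked construct.
print(result["labels"][0], round(result["scores"][0], 3))
```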
{"title":"“Transforming” Personality Scale Development: Illustrating the Potential of State-of-the-Art Natural Language Processing","authors":"Shea Fyffe, Philseok Lee, Seth A. Kaplan","doi":"10.1177/10944281231155771","DOIUrl":"https://doi.org/10.1177/10944281231155771","url":null,"abstract":"Natural language processing (NLP) techniques are becoming increasingly popular in industrial and organizational psychology. One promising area for NLP-based applications is scale development; yet, while many possibilities exist, so far these applications have been restricted—mainly focusing on automated item generation. The current research expands this potential by illustrating an NLP-based approach to content analysis, which manually categorizes scale items by their measured constructs. In NLP, content analysis is performed as a text classification task whereby a model is trained to automatically assign scale items to the construct that they measure. Here, we present an approach to text classification—using state-of-the-art transformer models—that builds upon past approaches. We begin by introducing transformer models and their advantages over alternative methods. Next, we illustrate how to train a transformer to content analyze Big Five personality items. Then, we compare the models trained to human raters, finding that transformer models outperform human raters and several alternative models. Finally, we present practical considerations, limitations, and future research directions.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47097386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supervised Construct Scoring to Reduce Personality Assessment Length: A Field Study and Introduction to the Short 10
Pub Date: 2023-01-03 | DOI: 10.1177/10944281221145694
Andrew B. Speer, James Perrotta, R. Jacobs
Personality assessments help identify qualified job applicants when making hiring decisions and are used broadly in the organizational sciences. However, many existing personality measures are quite lengthy, and companies and researchers frequently seek ways to shorten personality scales. The current research investigated the effectiveness of a new scale-shortening method called supervised construct scoring (SCS), testing the efficacy of this method across two applied samples. Using a combination of machine learning and content validity considerations, we show that multidimensional personality scales can be significantly shortened while maintaining reliability and validity, especially when compared to traditional shortening methods. In Study 1, we shortened a 100-item personality assessment of DeYoung et al.'s 10 facets, producing a scale 26% of the original length. SCS scores exhibited strong evidence of reliability, convergence with full-scale scores, and criterion-related validity. This measure, labeled the Short 10, is made freely available. In Study 2, we applied SCS to shorten an operational police personality assessment. By using SCS, we reduced test length to 25% of the original while maintaining similar levels of reliability and criterion-related validity when predicting job performance ratings.
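The supervised-scoring core of this idea can be sketched with a sparse regression: select a small, weighted item subset whose composite reproduces the full-scale score. The Python example below does this with a lasso on simulated item responses; it omits the content-validity constraints that SCS also imposes, and all data are hypothetical.

```python
# Minimal sketch of supervised scale shortening via sparse regression.
# NOT the full SCS procedure: content-validity constraints are omitted,
# and the item responses are simulated for illustration only.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_people, n_items = 500, 20
# Items share a common factor, so they intercorrelate like a real scale.
items = rng.normal(size=(n_people, n_items)) + rng.normal(size=(n_people, 1))
full_score = items.mean(axis=1)              # full-scale score to approximate

lasso = Lasso(alpha=0.02).fit(items, full_score)
kept = np.flatnonzero(lasso.coef_)           # items retained in the short form
short_score = lasso.predict(items)

r = np.corrcoef(full_score, short_score)[0, 1]
print(f"Retained {kept.size}/{n_items} items; r(full, short) = {r:.3f}")
```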
{"title":"Supervised Construct Scoring to Reduce Personality Assessment Length: A Field Study and Introduction to the Short 10","authors":"Andrew B. Speer, James Perrotta, R. Jacobs","doi":"10.1177/10944281221145694","DOIUrl":"https://doi.org/10.1177/10944281221145694","url":null,"abstract":"Personality assessments help identify qualified job applicants when making hiring decisions and are used broadly in the organizational sciences. However, many existing personality measures are quite lengthy, and companies and researchers frequently seek ways to shorten personality scales. The current research investigated the effectiveness of a new scale-shortening method called supervised construct scoring (SCS), testing the efficacy of this method across two applied samples. Using a combination of machine learning with content validity considerations, we show that multidimensional personality scales can be significantly shortened while maintaining reliability and validity, and especially when compared to traditional shortening methods. In Study 1, we shortened a 100-item personality assessment of DeYoung et al.'s 10 facets, producing a scale 26% the original length. SCS scores exhibited strong evidence of reliability, convergence with full scale scores, and criterion-related validity. This measure, labeled the Short 10, is made freely available. In Study 2, we applied SCS to shorten an operational police personality assessment. By using SCS, we reduced test length to 25% of the original length while maintaining similar levels of reliability and criterion-related validity when predicting job performance ratings.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48250283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Review Research as Scientific Inquiry
Pub Date: 2022-12-26 | DOI: 10.1177/10944281221127292
Sven Kunisch, D. Denyer, J. Bartunek, Markus Menz, Laura B. Cardinal
This article and the related upcoming Feature Topic at Organizational Research Methods were motivated by the concern that, despite the burgeoning number and diversity of review articles, there was a lack of guidance on how to produce rigorous and impactful literature reviews. In this article, we introduce review research as a class of research inquiry that uses prior research as data sources to develop knowledge contributions for academia, practice, and policy. We first trace the evolution of review research both outside of and within management, including the articles published in this Feature Topic, and provide a holistic definition of review research. Then, we argue that, given the plurality of forms of review research, the alignment of purpose and methods is crucial for high-quality review research. To this end, we discuss several review purposes, present criteria for assessing review research's rigor and impact, and explain how these criteria and the review methods need to be aligned with the review's purpose. Our paper provides guidance for conducting and evaluating review research and helps establish review research as a credible and legitimate scientific endeavor.
{"title":"Review Research as Scientific Inquiry","authors":"Sven Kunisch, D. Denyer, J. Bartunek, Markus Menz, Laura B. Cardinal","doi":"10.1177/10944281221127292","DOIUrl":"https://doi.org/10.1177/10944281221127292","url":null,"abstract":"This article and the related Feature Topic at Organizational Research Methods upcoming were motivated by the concern that despite the bourgeoning number and diversity of review articles, there was a lack of guidance on how to produce rigorous and impactful literature reviews. In this article, we introduce review research as a class of research inquiries that uses prior research as data sources to develop knowledge contributions for academia, practice and policy. We first trace the evolution of review research both outside of and within management including the articles published in this Feature Topic, and provide a holistic definition of review research. Then, we argue that in the plurality of forms of review research, the alignment of purpose and methods is crucial for high-quality review research. To accomplish this, we discuss several review purposes and criteria for assessing review research's rigor and impact, and discuss how these and the review methods need to be aligned with its purpose. Our paper provides guidance for conducting or evaluating review research and helps establish review research as a credible and legitimate scientific endeavor.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"26 1","pages":"3 - 45"},"PeriodicalIF":9.5,"publicationDate":"2022-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43364600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}