Pub Date: 2021-05-04 | DOI: 10.1177/10944281211008652
Zeki Simsek, B. Fox, Ciaran Heavey
In this study, we first develop a framework that presents systematicity as an encompassing orientation toward the application of explicit methods in the practice of literature reviews, informed by the principles of transparency, coverage, saturation, connectedness, universalism, and coherence. We then supplement that conceptual development with empirical insights into the reported practices of systematicity in a sample of 165 published reviews across three journals in organizational research. We finally trace implications for the future conduct of literature reviews, including the potential perils of systematicity without mindfulness.
{"title":"Systematicity in Organizational Research Literature Reviews: A Framework and Assessment","authors":"Zeki Simsek, B. Fox, Ciaran Heavey","doi":"10.1177/10944281211008652","DOIUrl":"https://doi.org/10.1177/10944281211008652","url":null,"abstract":"In this study, we first develop a framework that presents systematicity as an encompassing orientation toward the application of explicit methods in the practice of literature reviews, informed by the principles of transparency, coverage, saturation, connectedness, universalism, and coherence. We then supplement that conceptual development with empirical insights into the reported practices of systematicity in a sample of 165 published reviews across three journals in organizational research. We finally trace implications for the future conduct of literature reviews, including the potential perils of systematicity without mindfulness.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"26 1","pages":"292 - 321"},"PeriodicalIF":9.5,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10944281211008652","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45921356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-20 | DOI: 10.1177/10944281211005167
Andrew Parker, F. Pallotti, A. Lomi
Autologistic actor attribute models (ALAAMs) provide new analytical opportunities to advance research on how individual attitudes, cognitions, behaviors, and outcomes diffuse through networks of social relations in which individuals in organizations are embedded. ALAAMs add to available statistical models of social contagion the possibility of formulating and testing competing hypotheses about the specific mechanisms that shape patterns of adoption/diffusion. The main objective of this article is to provide an introduction and a guide to the specification, estimation, interpretation and evaluation of ALAAMs. Using original data, we demonstrate the value of ALAAMs in an analysis of academic performance and social networks in a class of graduate management students. We find evidence that both high and low performance are contagious, that is, diffuse through social contact. However, the contagion mechanisms that contribute to the diffusion of high performance and low performance differ subtly and systematically. Our results help us identify new questions that ALAAMs allow us to ask, new answers they may be able to provide, and the constraints that need to be relaxed to facilitate their more general adoption in organizational research.
{"title":"New Network Models for the Analysis of Social Contagion in Organizations: An Introduction to Autologistic Actor Attribute Models","authors":"Andrew Parker, F. Pallotti, A. Lomi","doi":"10.1177/10944281211005167","DOIUrl":"https://doi.org/10.1177/10944281211005167","url":null,"abstract":"Autologistic actor attribute models (ALAAMs) provide new analytical opportunities to advance research on how individual attitudes, cognitions, behaviors, and outcomes diffuse through networks of social relations in which individuals in organizations are embedded. ALAAMs add to available statistical models of social contagion the possibility of formulating and testing competing hypotheses about the specific mechanisms that shape patterns of adoption/diffusion. The main objective of this article is to provide an introduction and a guide to the specification, estimation, interpretation and evaluation of ALAAMs. Using original data, we demonstrate the value of ALAAMs in an analysis of academic performance and social networks in a class of graduate management students. We find evidence that both high and low performance are contagious, that is, diffuse through social contact. However, the contagion mechanisms that contribute to the diffusion of high performance and low performance differ subtly and systematically. 
Our results help us identify new questions that ALAAMs allow us to ask, new answers they may be able to provide, and the constraints that need to be relaxed to facilitate their more general adoption in organizational research.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"513 - 540"},"PeriodicalIF":9.5,"publicationDate":"2021-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10944281211005167","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41724651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
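Fitting an ALAAM proper requires specialized software that handles the dependence among actors' outcomes. As a hedged illustration of the core idea only, the sketch below computes the simplest "contagion" statistic such models include as a predictor: each actor's count of ties to others who already hold the binary attribute. All data here are toy values invented for the example.

```python
# Hedged sketch: the simplest "contagion" statistic in ALAAM-style models is
# each actor's count of ties to others who already hold the binary attribute.
# Real ALAAM estimation models the dependence among outcomes; this
# illustration only computes the exposure covariate.

def contagion_counts(adjacency, attribute):
    """For each actor i, count neighbors j with attribute[j] == 1.

    adjacency: list of lists (0/1, symmetric, zero diagonal)
    attribute: list of 0/1 attribute indicators
    """
    n = len(attribute)
    return [
        sum(adjacency[i][j] * attribute[j] for j in range(n) if j != i)
        for i in range(n)
    ]

# Toy network: actor 0 is tied to 1 and 2; actors 1 and 2 hold the attribute.
adj = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
]
attr = [0, 1, 1, 0]
print(contagion_counts(adj, attr))  # each actor's exposure to attribute-holders
```

In a full ALAAM this exposure count enters alongside other network configurations, and the estimated coefficients are what support or reject a given contagion mechanism.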
Pub Date: 2021-04-15 | DOI: 10.1177/10944281211002911
E. Quintane, M. Wood, John Dunn, L. Falzon
Extant research in organizational networks has provided critical insights into understanding the benefits of occupying a brokerage position. More recently, researchers have moved beyond the brokerage position to consider the brokering processes (arbitration and collaboration) brokers engage in and their implications for performance. However, brokering processes are typically measured using scales that reflect individuals’ orientation toward engaging in a behavior, rather than the behavior itself. In this article, we propose a measure that captures the behavioral process of brokering. The measure indicates the extent to which actors engage in arbitration versus collaboration based on sequences of time-stamped relational events, such as emails, message boards, and recordings of meetings. We demonstrate the validity of our measure as well as its predictive ability. By leveraging the temporal information inherent in sequences of relational events, our behavioral measure of brokering creates opportunities for researchers to explore the dynamics of brokerage and their impact on individuals, and also paves the way for a systematic examination of the temporal dynamics of networks.
{"title":"Temporal Brokering: A Measure of Brokerage as a Behavioral Process","authors":"E. Quintane, M. Wood, John Dunn, L. Falzon","doi":"10.1177/10944281211002911","DOIUrl":"https://doi.org/10.1177/10944281211002911","url":null,"abstract":"Extant research in organizational networks has provided critical insights into understanding the benefits of occupying a brokerage position. More recently, researchers have moved beyond the brokerage position to consider the brokering processes (arbitration and collaboration) brokers engage in and their implications for performance. However, brokering processes are typically measured using scales that reflect individuals’ orientation toward engaging in a behavior, rather than the behavior itself. In this article, we propose a measure that captures the behavioral process of brokering. The measure indicates the extent to which actors engage in arbitration versus collaboration based on sequences of time stamped relational events, such as emails, message boards, and recordings of meetings. We demonstrate the validity of our measure as well as its predictive ability. 
By leveraging the temporal information inherent in sequences of relational events, our behavioral measure of brokering creates opportunities for researchers to explore the dynamics of brokerage and their impact on individuals, and also paves the way for a systematic examination of the temporal dynamics of networks.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"459 - 489"},"PeriodicalIF":9.5,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10944281211002911","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47591297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
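The published measure distinguishes arbitration from collaboration using richer sequence information than can be shown here. As a hedged toy version of the underlying building block, the sketch below counts candidate brokered two-paths in a stream of time-stamped relational events: actor b receives a message from i and later sends one to a third party j within a time window. The function name, window, and event data are all illustrative assumptions.

```python
# Hedged toy version of a brokering count built from time-stamped relational
# events (e.g., emails): broker b completes a candidate two-path when b
# receives a message from i and later sends one to some third party j within
# a time window. The published measure goes further and classifies sequences
# as arbitration vs. collaboration; this sketch only counts two-paths.

def brokered_two_paths(events, broker, window):
    """events: list of (sender, receiver, time); returns two-path count for broker."""
    inbound = [(s, t) for s, r, t in events if r == broker]
    outbound = [(r, t) for s, r, t in events if s == broker]
    count = 0
    for sender, t_in in inbound:
        for receiver, t_out in outbound:
            if t_in < t_out <= t_in + window and receiver != sender:
                count += 1
    return count

events = [
    ("a", "b", 1.0),  # a -> b
    ("b", "c", 2.0),  # b -> c  (completes the two-path a -> b -> c)
    ("b", "a", 3.0),  # b -> a  (returns to the sender, so not a two-path)
    ("d", "b", 9.0),  # d -> b, but b sends nothing afterward within the window
]
print(brokered_two_paths(events, "b", window=5.0))
```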
Pub Date: 2021-04-15 | DOI: 10.1177/10944281211002904
Tianjun Sun, Bo Zhang, Mengyang Cao, F. Drasgow
With the increasing popularity of noncognitive inventories in personnel selection, organizations typically wish to be able to tell when a job applicant purposefully manufactures a favorable impression. Past faking research has primarily focused on how to reduce faking via instrument design, warnings, and statistical corrections for faking. This article took a new approach by examining the effects of faking (experimentally manipulated and contextually driven) on response processes. We modified a recently introduced item response theory tree modeling procedure, the three-process model, to identify faking in two studies. Study 1 examined self-reported vocational interest assessment responses using an induced faking experimental design. Study 2 examined self-reported personality assessment responses when some people were in a high-stakes situation (i.e., selection). Across the two studies, individuals instructed or expected to fake were found to engage in more extreme responding. By identifying the underlying differences between fakers and honest respondents, the new approach improves our understanding of faking. Percentage cutoffs based on extreme responding produced a faker classification precision of 85% on average.
{"title":"Faking Detection Improved: Adopting a Likert Item Response Process Tree Model","authors":"Tianjun Sun, Bo Zhang, Mengyang Cao, F. Drasgow","doi":"10.1177/10944281211002904","DOIUrl":"https://doi.org/10.1177/10944281211002904","url":null,"abstract":"With the increasing popularity of noncognitive inventories in personnel selection, organizations typically wish to be able to tell when a job applicant purposefully manufactures a favorable impression. Past faking research has primarily focused on how to reduce faking via instrument design, warnings, and statistical corrections for faking. This article took a new approach by examining the effects of faking (experimentally manipulated and contextually driven) on response processes. We modified a recently introduced item response theory tree modeling procedure, the three-process model, to identify faking in two studies. Study 1 examined self-reported vocational interest assessment responses using an induced faking experimental design. Study 2 examined self-reported personality assessment responses when some people were in a high-stakes situation (i.e., selection). Across the two studies, individuals instructed or expected to fake were found to engage in more extreme responding. By identifying the underlying differences between fakers and honest respondents, the new approach improves our understanding of faking. 
Percentage cutoffs based on extreme responding produced a faker classification precision of 85% on average.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"490 - 512"},"PeriodicalIF":9.5,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10944281211002904","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45621939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
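The article's classification rests on an item response tree model; the percentage-cutoff idea it reports can, however, be sketched directly. The code below computes each respondent's share of Likert endpoint answers and flags those above a cutoff. The cutoff value and the response data are illustrative assumptions, not the article's calibrated values.

```python
# Hedged sketch of flagging via extreme responding: compute each respondent's
# proportion of endpoint answers (1 or 5 on a 5-point Likert scale) and flag
# those above an illustrative cutoff. The article derives its classification
# from an IRT tree model; this reproduces only the simple cutoff idea.

def extreme_proportion(responses, low=1, high=5):
    """Share of responses at the scale endpoints."""
    return sum(r in (low, high) for r in responses) / len(responses)

def flag_fakers(respondents, cutoff=0.6):
    """respondents: dict id -> list of Likert responses; cutoff is illustrative."""
    return {rid for rid, resp in respondents.items()
            if extreme_proportion(resp) > cutoff}

data = {
    "honest": [2, 3, 4, 3, 2, 4, 3, 2],   # midscale responding
    "faker": [5, 5, 1, 5, 5, 5, 1, 5],    # heavy endpoint use
}
print(flag_fakers(data))
```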
Pub Date: 2021-04-01 | DOI: 10.1177/10944281211002293
{"title":"Corrigendum to On Ignoring the Random Effects Assumption in Multilevel Models: Review, Critique, and Recommendations","authors":"","doi":"10.1177/10944281211002293","DOIUrl":"https://doi.org/10.1177/10944281211002293","url":null,"abstract":"","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"24 1","pages":"485 - 485"},"PeriodicalIF":9.5,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10944281211002293","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43139192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-01 | DOI: 10.1177/1094428119857471
Janaki Gooty, G. Banks, Andrew C. Loignon, Scott Tonidandel, Courtney E. Williams
Meta-analyses are well known and widely implemented in almost every domain of research in management as well as the social, medical, and behavioral sciences. While this technique is useful for determining validity coefficients (i.e., effect sizes), meta-analyses are predicated on the assumption of independence of primary effect sizes, which might be routinely violated in the organizational sciences. Here, we discuss the implications of violating the independence assumption and demonstrate how meta-analysis can be cast as a multilevel, variance-known (Vknown) model to account for such dependency in primary studies’ effect sizes. We illustrate such techniques for meta-analytic data via the HLM 7.0 software, as it remains the most widely used multilevel analysis software in management. In so doing, we draw on examples in educational psychology (where such techniques were first developed), the organizational sciences, and a Monte Carlo simulation (Appendix). We conclude with a discussion of implications, caveats, and future extensions. Our Appendix details features of a newly developed application that is free (based on R), user-friendly, and provides an alternative to the HLM program.
{"title":"Meta-Analyses as a Multi-Level Model","authors":"Janaki Gooty, G. Banks, Andrew C. Loignon, Scott Tonidandel, Courtney E. Williams","doi":"10.1177/1094428119857471","DOIUrl":"https://doi.org/10.1177/1094428119857471","url":null,"abstract":"Meta-analyses are well known and widely implemented in almost every domain of research in management as well as the social, medical, and behavioral sciences. While this technique is useful for determining validity coefficients (i.e., effect sizes), meta-analyses are predicated on the assumption of independence of primary effect sizes, which might be routinely violated in the organizational sciences. Here, we discuss the implications of violating the independence assumption and demonstrate how meta-analysis could be cast as a multilevel, variance known (Vknown) model to account for such dependency in primary studies’ effect sizes. We illustrate such techniques for meta-analytic data via the HLM 7.0 software as it remains the most widely used multilevel analyses software in management. In so doing, we draw on examples in educational psychology (where such techniques were first developed), organizational sciences, and a Monte Carlo simulation (Appendix). We conclude with a discussion of implications, caveats, and future extensions. 
Our Appendix details features of a newly developed application that is free (based on R), user-friendly, and provides an alternative to the HLM program.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"24 1","pages":"389 - 411"},"PeriodicalIF":9.5,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1094428119857471","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47635585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
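The "variance known" idea can be illustrated in its simplest, fixed-effect form: each study's effect size comes with a known sampling variance, and the pooled estimate is the precision-weighted mean. The multilevel Vknown model generalizes this by letting study-level predictors explain between-study variance, which requires dedicated software; the numbers below are synthetic.

```python
# Hedged illustration of the variance-known starting point: each study i
# reports an effect size y_i with known sampling variance v_i. The
# fixed-effect pooled estimate is the precision-weighted mean, with
# variance 1 / sum(1/v_i). The multilevel Vknown model builds on this by
# modeling between-study variance with predictors.
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted pooled effect and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

effects = [0.30, 0.10, 0.25]      # synthetic primary-study effect sizes
variances = [0.01, 0.04, 0.02]    # their known sampling variances
pooled, se = fixed_effect_pool(effects, variances)
print(round(pooled, 3), round(se, 3))
```

Note how the precise study (variance 0.01) pulls the pooled estimate toward its own effect size; dependence among the y_i would invalidate the simple weight calculation, which is the problem the multilevel recasting addresses.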
Pub Date: 2021-03-19 | DOI: 10.1177/1094428121993228
A. Shamsollahi, M. Zyphur, Ozlem Ozkok
Cross-lagged panel models (CLPMs) are common, but their applications often focus on “short-run” effects among temporally proximal observations. This addresses questions about how dynamic systems may immediately respond to interventions, but fails to show how systems evolve over longer timeframes. We explore three types of “long-run” effects in dynamic systems that extend recent work on “impulse responses,” which reflect potential long-run effects of one-time interventions. Going beyond these, we first treat evaluations of system (in)stability by testing for “permanent effects,” which are important because in unstable systems even a one-time intervention may have enduring effects. Second, we explore classic econometric long-run effects that show how dynamic systems may respond to interventions that are sustained over time. Third, we treat “accumulated responses” to model how systems may respond to repeated interventions over time. We illustrate tests of each long-run effect in a simulated dataset and we provide all materials online including user-friendly R code that automates estimating, testing, reporting, and plotting all effects (see https://doi.org/10.26188/13506861). We conclude by emphasizing the value of aligning specific longitudinal hypotheses with quantitative methods.
{"title":"Long-Run Effects in Dynamic Systems: New Tools for Cross-Lagged Panel Models","authors":"A. Shamsollahi, M. Zyphur, Ozlem Ozkok","doi":"10.1177/1094428121993228","DOIUrl":"https://doi.org/10.1177/1094428121993228","url":null,"abstract":"Cross-lagged panel models (CLPMs) are common, but their applications often focus on “short-run” effects among temporally proximal observations. This addresses questions about how dynamic systems may immediately respond to interventions, but fails to show how systems evolve over longer timeframes. We explore three types of “long-run” effects in dynamic systems that extend recent work on “impulse responses,” which reflect potential long-run effects of one-time interventions. Going beyond these, we first treat evaluations of system (in)stability by testing for “permanent effects,” which are important because in unstable systems even a one-time intervention may have enduring effects. Second, we explore classic econometric long-run effects that show how dynamic systems may respond to interventions that are sustained over time. Third, we treat “accumulated responses” to model how systems may respond to repeated interventions over time. We illustrate tests of each long-run effect in a simulated dataset and we provide all materials online including user-friendly R code that automates estimating, testing, reporting, and plotting all effects (see https://doi.org/10.26188/13506861). 
We conclude by emphasizing the value of aligning specific longitudinal hypotheses with quantitative methods.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"435 - 458"},"PeriodicalIF":9.5,"publicationDate":"2021-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1094428121993228","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45394357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
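The long-run effect of a sustained intervention has a simple numerical counterpart that can be sketched without the article's full R tooling. In a bivariate cross-lagged system x_{t+1} = B x_t + u with a stable coefficient matrix B (eigenvalues inside the unit circle), the accumulated response to a sustained unit intervention u converges to (I - B)^{-1} u. The coefficients below are illustrative, not estimates from any dataset.

```python
# Hedged numerical sketch of a "long-run" effect in a stable bivariate
# cross-lagged system x_{t+1} = B x_t + u. Under a sustained intervention u,
# the state converges to (I - B)^{-1} u; iterating the system and the
# closed-form 2x2 inverse should agree.

def mat_vec(B, x):
    return [sum(B[i][j] * x[j] for j in range(len(x))) for i in range(len(B))]

def accumulated_response(B, impulse, steps):
    """Iterate x_{t+1} = B x_t + impulse from zero for `steps` periods."""
    x = [0.0] * len(impulse)
    for _ in range(steps):
        x = [xi + ui for xi, ui in zip(mat_vec(B, x), impulse)]
    return x

B = [[0.5, 0.2],   # autoregressive and cross-lagged coefficients (illustrative)
     [0.1, 0.4]]
impulse = [1.0, 0.0]  # sustained unit intervention on the first variable

long_run = accumulated_response(B, impulse, steps=200)

# Analytic check: first column of (I - B)^{-1} for the 2x2 case
det = (1 - B[0][0]) * (1 - B[1][1]) - B[0][1] * B[1][0]
analytic = [(1 - B[1][1]) / det, B[1][0] / det]
print([round(v, 4) for v in long_run], [round(v, 4) for v in analytic])
```

A one-time ("impulse") intervention instead decays along powers of B, which is the distinction between impulse responses and the sustained and accumulated effects the article treats.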
Pub Date: 2021-03-11 | DOI: 10.1177/1094428121991907
Mikko Rönkkö, E. Aalto, H. Tenhunen, Miguel I. Aguirre-Urreta
Transforming variables before analysis or applying a transformation as a part of a generalized linear model are common practices in organizational research. Several methodological articles addressing the topic, either directly or indirectly, have been published in the recent past. In this article, we point out a few misconceptions about transformations and propose a set of eight simple guidelines for addressing them. Our main argument is that transformations should not be chosen based on the nature or distribution of the individual variables but based on the functional form of the relationship between two or more variables that is expected from theory or discovered empirically. Building on a systematic review of six leading management journals, we point to several ways the specification and interpretation of nonlinear models can be improved.
{"title":"Eight Simple Guidelines for Improved Understanding of Transformations and Nonlinear Effects","authors":"Mikko Rönkkö, E. Aalto, H. Tenhunen, Miguel I. Aguirre-Urreta","doi":"10.1177/1094428121991907","DOIUrl":"https://doi.org/10.1177/1094428121991907","url":null,"abstract":"Transforming variables before analysis or applying a transformation as a part of a generalized linear model are common practices in organizational research. Several methodological articles addressing the topic, either directly or indirectly, have been published in the recent past. In this article, we point out a few misconceptions about transformations and propose a set of eight simple guidelines for addressing them. Our main argument is that transformations should not be chosen based on the nature or distribution of the individual variables but based on the functional form of the relationship between two or more variables that is expected from theory or discovered empirically. Building on a systematic review of six leading management journals, we point to several ways the specification and interpretation of nonlinear models can be improved.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"48 - 87"},"PeriodicalIF":9.5,"publicationDate":"2021-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1094428121991907","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42879922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-11 | DOI: 10.1177/1094428121999086
Yin Lin
Forced-choice (FC) assessments of noncognitive psychological constructs (e.g., personality, behavioral tendencies) are popular in high-stakes organizational testing scenarios (e.g., informing hiring decisions) due to their enhanced resistance against response distortions (e.g., faking good, impression management). The measurement precision of FC assessment scores used to inform personnel decisions is of paramount importance in practice. Different types of reliability estimates are reported for FC assessment scores in current publications, while consensus on best practices appears to be lacking. In order to provide understanding and structure around the reporting of FC reliability, this study systematically examined different types of reliability estimation methods for Thurstonian IRT-based FC assessment scores: their theoretical differences were discussed, and their numerical differences were illustrated through a series of simulations and empirical studies. In doing so, this study provides a practical guide for appraising different reliability estimation methods for IRT-based FC assessment scores.
{"title":"Reliability Estimates for IRT-Based Forced-Choice Assessment Scores","authors":"Yin Lin","doi":"10.1177/1094428121999086","DOIUrl":"https://doi.org/10.1177/1094428121999086","url":null,"abstract":"Forced-choice (FC) assessments of noncognitive psychological constructs (e.g., personality, behavioral tendencies) are popular in high-stakes organizational testing scenarios (e.g., informing hiring decisions) due to their enhanced resistance against response distortions (e.g., faking good, impression management). The measurement precisions of FC assessment scores used to inform personnel decisions are of paramount importance in practice. Different types of reliability estimates are reported for FC assessment scores in current publications, while consensus on best practices appears to be lacking. In order to provide understanding and structure around the reporting of FC reliability, this study systematically examined different types of reliability estimation methods for Thurstonian IRT-based FC assessment scores: their theoretical differences were discussed, and their numerical differences were illustrated through a series of simulations and empirical studies. In doing so, this study provides a practical guide for appraising different reliability estimation methods for IRT-based FC assessment scores.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"575 - 590"},"PeriodicalIF":9.5,"publicationDate":"2021-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1094428121999086","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47986107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-09 | DOI: 10.1177/1094428121991230
David Antons, Christoph F. Breidbach, Amol M. Joshi, T. Salge
The substantial volume, continued growth, and resulting complexity of the scientific literature not only increase the need for systematic, replicable, and rigorous literature reviews, but also highlight the natural limits of human researchers’ information processing capabilities. In search of a solution to this dilemma, computational techniques are beginning to support human researchers in synthesizing large bodies of literature. However, actionable methodological guidance on how to design, conduct, and document such computationally augmented literature reviews is lacking to date. We respond by introducing and defining computational literature reviews (CLRs) as a new review method and put forward a six-step roadmap, covering the CLR process from identifying the review objectives to selecting algorithms and reporting findings. We make the CLR method accessible to novice and expert users alike by identifying critical design decisions and typical challenges for each step and provide practical guidelines for tailoring the CLR method to four conceptual review goals. As such, we present CLRs as a literature review method where the choice, design, and implementation of a CLR are guided by specific review objectives, methodological capabilities, and resource constraints of the human researcher.
{"title":"Computational Literature Reviews: Method, Algorithms, and Roadmap","authors":"David Antons, Christoph F. Breidbach, Amol M. Joshi, T. Salge","doi":"10.1177/1094428121991230","DOIUrl":"https://doi.org/10.1177/1094428121991230","url":null,"abstract":"The substantial volume, continued growth, and resulting complexity of the scientific literature not only increases the need for systematic, replicable, and rigorous literature reviews, but also highlights the natural limits of human researchers’ information processing capabilities. In search of a solution to this dilemma, computational techniques are beginning to support human researchers in synthesizing large bodies of literature. However, actionable methodological guidance on how to design, conduct, and document such computationally augmented literature reviews is lacking to date. We respond by introducing and defining computational literature reviews (CLRs) as a new review method and put forward a six-step roadmap, covering the CLR process from identifying the review objectives to selecting algorithms and reporting findings. We make the CLR method accessible to novice and expert users alike by identifying critical design decisions and typical challenges for each step and provide practical guidelines for tailoring the CLR method to four conceptual review goals. 
As such, we present CLRs as a literature review method where the choice, design, and implementation of a CLR are guided by specific review objectives, methodological capabilities, and resource constraints of the human researcher.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"26 1","pages":"107 - 138"},"PeriodicalIF":9.5,"publicationDate":"2021-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1094428121991230","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48441588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
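One early algorithmic step in such a pipeline, turning a corpus of abstracts into term counts that downstream methods (topic models, co-word networks) build on, can be sketched with the standard library alone. Real CLRs use dedicated text-mining tooling; the stopword list and corpus below are illustrative assumptions.

```python
# Hedged miniature of one CLR pipeline step: tokenize a corpus of abstracts
# and count terms, the raw material for topic models or co-word networks.
# The stopword list is deliberately tiny and illustrative.
from collections import Counter
import re

STOPWORDS = {"the", "of", "and", "a", "in", "to", "for"}

def term_counts(abstracts):
    """Lowercase, tokenize on letter runs, drop stopwords, and count."""
    counts = Counter()
    for text in abstracts:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts

corpus = [
    "Computational methods for literature reviews.",
    "A roadmap for computational review methods.",
]
print(term_counts(corpus).most_common(3))
```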