Endogenous Moderator Models: What They Are, What They Aren’t, and Why It Matters
Pub Date: 2022-01-06 | DOI: 10.1177/10944281211065111 | Organizational Research Methods, 26(1), 499–523
J. Cortina, C. Dormann, Hannah M. Markell, Sheila K. Keener
Models that combine moderation and mediation are increasingly common. One such model is that in which one variable causes another variable that, in turn, moderates the relationship between two other variables. There are many recent examples of these Endogenous Moderator Models (EMMs). They bear little superficial resemblance to second-stage moderation models, and they are almost never conceptualized and tested as such. We use path analytic equations to show that this is precisely what EMMs are. Specifically, we use these equations, together with a review of recent EMMs, to show that these models are seldom conceptualized or tested properly and to identify the best ways to handle them. We then use Monte Carlo simulation to show the consequences of testing these models as they are typically tested rather than as second-stage moderation models. We end with recommendations and provide example datasets and code for SPSS and R.
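To make the equivalence concrete, here is a minimal R sketch (not the article's supplied SPSS/R datasets or code; all variable names and effect sizes are invented) of an EMM estimated as second-stage moderation: z causes the moderator w, and w moderates the x-to-y path.

# Simulate an EMM: z -> w (first stage), w moderates x -> y (second stage)
set.seed(1)
n <- 500
z <- rnorm(n)
w <- 0.5 * z + rnorm(n)              # endogenous moderator: caused by z
x <- rnorm(n)
y <- 0.3 * x + 0.2 * w + 0.25 * x * w + rnorm(n)

stage1 <- lm(w ~ z)                  # first stage: z -> w
stage2 <- lm(y ~ x * w)              # x * w expands to x + w + x:w
coef(stage1)["z"]                    # first-stage effect of z on w
summary(stage2)$coefficients["x:w", ]  # second-stage moderation effect

Estimating the model this way, rather than treating the moderator as exogenous, is the point of contrast the article draws.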
{"title":"Endogenous Moderator Models: What They are, What They Aren’t, and Why it Matters","authors":"J. Cortina, C. Dormann, Hannah M. Markell, Sheila K. Keener","doi":"10.1177/10944281211065111","DOIUrl":"https://doi.org/10.1177/10944281211065111","url":null,"abstract":"Models that combine moderation and mediation are increasingly common. One such model is that in which one variable causes another variable that, in turn, moderates the relationship between two other variables. There are many recent examples of these Endogenous Moderator Models (EMMs). They bear little superficial resemblance to second-stage moderation models, and they are almost never conceptualized and tested as such. We use path analytic equations to show that this is precisely what EMMs are. Specifically, we use these path analytic equations and a review of recent EMMs in order to show that these models are seldom conceptualized or tested properly and to understand the best ways to handle such models. We then use Monte Carlo simulation to show the consequences of testing these models as they are typically tested rather than as second-stage moderation models. We end with recommendations and provide example datasets and code for SPSS and R.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"26 1","pages":"499 - 523"},"PeriodicalIF":9.5,"publicationDate":"2022-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46761191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conducting and Evaluating Multilevel Studies: Recommendations, Resources, and a Checklist
Pub Date: 2022-01-05 | DOI: 10.1177/10944281211060712 | Organizational Research Methods
Vicente González-Romá, Ana Hernández
Multilevel methods allow researchers to investigate relationships that span levels of analysis (e.g., individuals, teams, and organizations). The popularity of these methods for studying organizational phenomena has increased in recent decades. Methodologists have examined how these methods behave under different conditions, providing an empirical basis for making sound decisions when using them. In this article, we provide recommendations, tools, resources, and a checklist that can be useful for scholars conducting or assessing multilevel studies. Our focus is on two-level designs, in which Level-1 entities are neatly nested within Level-2 entities and top-down effects are estimated. However, some of our recommendations also apply to more complex multilevel designs.
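As a concrete illustration of the focal design class, here is a minimal sketch using the lme4 package (our example, not code from the article; all data, variable names, and effect sizes are simulated) of a two-level random-intercept model with a top-down Level-2 effect:

# Two-level design: individuals nested in teams, with a team-level predictor
library(lme4)
set.seed(2)
teams <- 40; per_team <- 10
team    <- rep(seq_len(teams), each = per_team)
climate <- rnorm(teams)[team]        # Level-2 predictor, constant within team
effort  <- rnorm(teams * per_team)   # Level-1 predictor
perf <- 0.4 * effort + 0.3 * climate +
        0.5 * rnorm(teams)[team] +   # random intercept variation by team
        rnorm(teams * per_team)

d <- data.frame(perf, effort, climate, team)
fit <- lmer(perf ~ effort + climate + (1 | team), data = d)
summary(fit)                         # 'climate' carries the top-down effect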
{"title":"Conducting and Evaluating Multilevel Studies: Recommendations, Resources, and a Checklist","authors":"Vicente González-Romá, Ana Hernández","doi":"10.1177/10944281211060712","DOIUrl":"https://doi.org/10.1177/10944281211060712","url":null,"abstract":"Multilevel methods allow researchers to investigate relationships that expand across levels (e.g., individuals, teams, and organizations). The popularity of these methods for studying organizational phenomena has increased in recent decades. Methodologists have examined how these methods work under different conditions, providing an empirical base for making sound decisions when using these methods. In this article, we provide recommendations, tools, resources, and a checklist that can be useful for scholars involved in conducting or assessing multilevel studies. The focus of our article is on two-level designs, in which Level-1 entities are neatly nested within Level-2 entities, and top-down effects are estimated. However, some of our recommendations are also applicable to more complex multilevel designs.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":" ","pages":""},"PeriodicalIF":9.5,"publicationDate":"2022-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42544757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Review of Measurement Equivalence in Organizational Research: What's Old, What's New, What's Next?
Pub Date: 2021-12-17 | DOI: 10.1177/10944281211056524 | Organizational Research Methods, 25(1), 741–785
Ajay V. Somaraju, Christopher D. Nye, J. Olenick
The study of measurement equivalence (ME) has important implications for organizational research. Nonequivalence across groups or over time can affect the results of a study and the conclusions drawn from it. Accordingly, the review by Vandenberg and Lance (2000) has been highly cited and has played an important role in understanding the measurement of organizational constructs. However, that paper is now 20 years old, and a number of advances have been made in the application and interpretation of ME since its publication. Therefore, the goal of the present paper is to provide an updated review of ME techniques that describes recent advances in testing for ME and proposes a taxonomy of potential sources of nonequivalence. Finally, we articulate recommendations for applying these newer methods and consider future directions for ME research in the organizational literature.
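For readers who want to see the standard workflow the review builds on, here is a minimal sketch using the lavaan package (our illustration with simulated data, not code from the article) of the usual configural-metric-scalar sequence of increasingly constrained multigroup CFAs:

# Measurement equivalence via nested multigroup CFAs
library(lavaan)
set.seed(3)
n <- 400
f <- rnorm(n)                        # common factor
d <- data.frame(grp = rep(c("a", "b"), each = n / 2),
                i1 = f + rnorm(n), i2 = f + rnorm(n),
                i3 = f + rnorm(n), i4 = f + rnorm(n))

model <- 'f =~ i1 + i2 + i3 + i4'
configural <- cfa(model, data = d, group = "grp")
metric     <- cfa(model, data = d, group = "grp",
                  group.equal = "loadings")
scalar     <- cfa(model, data = d, group = "grp",
                  group.equal = c("loadings", "intercepts"))
anova(configural, metric, scalar)    # chi-square difference tests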
{"title":"A Review of Measurement Equivalence in Organizational Research: What's Old, What's New, What's Next?","authors":"Ajay V. Somaraju, Christopher D. Nye, J. Olenick","doi":"10.1177/10944281211056524","DOIUrl":"https://doi.org/10.1177/10944281211056524","url":null,"abstract":"The study of measurement equivalence has important implications for organizational research. Nonequivalence across groups or over time can affect the results of a study and the conclusions that are drawn from it. As a result, the review paper by Vandenberg & Lance (2000) has been highly cited and has played an important role in understanding the measurement of organizational constructs. However, that paper is now 20 years old, and a number of advances have been made in the application and interpretation of measurement equivalence (ME) since its publication. Therefore, the goal of the present paper is to provide an updated review of ME techniques that describes recent advances in testing for ME and proposes a taxonomy of potential sources of nonequivalence. Finally, we articulate recommendations for applying these newer methods and consider future directions for measurement equivalence research in the organizational literature.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"741 - 785"},"PeriodicalIF":9.5,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43395326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Quick and the Careless: The Construct Validity of Page Time as a Measure of Insufficient Effort Responding to Surveys
Pub Date: 2021-12-06 | DOI: 10.1177/10944281211056520 | Organizational Research Methods, 26(1), 323–352
N. Bowling, Jason L. Huang, Cheyna K. Brower, Caleb B. Bragg
Several recent studies have examined the prevention, causes, and consequences of insufficient effort responding (IER) to surveys. Scientific progress in this area, however, rests on the availability of construct-valid IER measures. In the current paper, we describe the potential merits of the page time index, which is computed by counting the number of questionnaire pages that a participant completed faster than two seconds per item (see Huang et al., 2012). We conducted three studies (total N = 1,056) to examine the page time index's construct validity. Across these studies, we found that page time converged highly with other IER indices, that it was sensitive to an experimental manipulation warning participants to respond carefully, and that it predicted the extent to which participants were unable to recognize item content. We also found that page time's validity was superior to that of total completion time and that the two-seconds-per-item rule yielded a construct-valid page time score across items of various word lengths. Given its apparent validity, we provide practical recommendations for the use of the page time index.
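A minimal R implementation of the index as the abstract describes it might look as follows (the function and argument names are ours, not the authors'):

# Page time index: count pages completed faster than two seconds per item
page_time_index <- function(page_seconds, items_per_page, threshold = 2) {
  stopifnot(length(page_seconds) == length(items_per_page))
  sum(page_seconds / items_per_page < threshold)
}

# Five pages of 10 items each; pages 2 and 4 were answered too quickly
page_time_index(c(45, 12, 50, 15, 60), rep(10, 5))  # returns 2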
{"title":"The Quick and the Careless: The Construct Validity of Page Time as a Measure of Insufficient Effort Responding to Surveys","authors":"N. Bowling, Jason L. Huang, Cheyna K. Brower, Caleb B. Bragg","doi":"10.1177/10944281211056520","DOIUrl":"https://doi.org/10.1177/10944281211056520","url":null,"abstract":"Several recent studies have examined the prevention, causes, and consequences of insufficient effort responding (IER) to surveys. Scientific progress in this area, however, rests on the availability of construct-valid IER measures. In the current paper we describe the potential merits of the page time index, which is computed by counting the number of questionnaire pages to which a participant has responded more quickly than two seconds per item (see Huang et al., 2012). We conducted three studies (total N = 1,056) to examine the page time index's construct validity. Across these studies, we found that page time converged highly with other IER indices, that it was sensitive to an experimental manipulation warning participants to respond carefully, and that it predicted the extent to which participants were unable to recognize item content. We also found that page time's validity was superior to that of total completion time and that the two-seconds-per-item rule yielded a construct-valid page time score for items of various word lengths. Given its apparent validity, we provide practical recommendations for the use of the page time index.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"26 1","pages":"323 - 352"},"PeriodicalIF":9.5,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41503816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Erratum to Interaction Effects in Cross-Lagged Panel Models: SEM with Latent Interactions Applied to Work-Family Conflict, Job Satisfaction, and Gender
Pub Date: 2021-12-04 | DOI: 10.1177/10944281211067746 | Organizational Research Methods, 25(1), 815–816
The Use and Misuse of Organizational Research Methods ‘Best Practice’ Articles
Pub Date: 2021-12-03 | DOI: 10.1177/10944281211060706 | Organizational Research Methods, 26(1), 387–408
Liana M. Kreamer, Betsy H. Albritton, Scott Tonidandel, S. Rogelberg
This study explores how researchers in the organizational sciences use and/or cite methodological ‘best practice’ (BP) articles. Namely, are scholars adhering fully to the prescribed practices they cite, or are they cherry-picking from recommended practices without disclosing it? Or, worse yet, are scholars inaccurately following the methodological best practices they cite? To answer these questions, we selected three seminal and highly cited BP articles published in Organizational Research Methods (ORM) within the past ten years. These articles offer clear and specific methodological recommendations for researchers making decisions about the design, measurement, and interpretation of empirical studies. We then gathered all articles that have cited these BP pieces. Using comprehensive coding forms, we evaluated how authors use and cite BP articles (e.g., whether they appropriately follow the recommended practices). Our results revealed substantial variation: 17.4% of citing articles cited BP articles appropriately, 47.7% cited them with minor inaccuracies, and 34.5% cited them inappropriately. These findings shed light on the use (and misuse) of methodological recommendations, offering insight into how we can better digest and implement best practices when designing studies and testing theory. Key implications and recommendations for editors, reviewers, and authors are discussed.
{"title":"The Use and Misuse of Organizational Research Methods ‘Best Practice’ Articles","authors":"Liana M. Kreamer, Betsy H. Albritton, Scott Tonidandel, S. Rogelberg","doi":"10.1177/10944281211060706","DOIUrl":"https://doi.org/10.1177/10944281211060706","url":null,"abstract":"This study explores how researchers in the organizational sciences use and/or cite methodological ‘best practice’ (BP) articles. Namely, are scholars adhering fully to the prescribed practices they cite, or are they cherry picking from recommended practices without disclosing? Or worse yet, are scholars inaccurately following the methodological best practices they cite? To answer these questions, we selected three seminal and highly cited best practice articles published in Organizational Research Methods (ORM) within the past ten years. These articles offer clear and specific methodological recommendations for researchers as they make decisions regarding the design, measurement, and interpretation of empirical studies. We then gathered all articles that have cited these best practice pieces. Using comprehensive coding forms, we evaluated how authors are using and citing best practice articles (e.g., if they are appropriately following the recommended practices). Our results revealed substantial variation in how authors cited best practice articles, with 17.4% appropriately citing, 47.7% citing with minor inaccuracies, and 34.5% inappropriately citing BP articles. These findings shed light on the use (and misuse) of methodological recommendations, offering insight into how we can better improve our digestion and implementation of best practices as we design and test research and theory. Key implications and recommendations for editors, reviewers, and authors are discussed.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"26 1","pages":"387 - 408"},"PeriodicalIF":9.5,"publicationDate":"2021-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44669351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inflection Points, Kinks, and Jumps: A Statistical Approach to Detecting Nonlinearities
Pub Date: 2021-12-03 | DOI: 10.1177/10944281211058466 | Organizational Research Methods, 25(1), 786–814
Peren Arin, M. Minniti, S. Murtinu, Nicola Spagnolo
Inflection points, kinks, and jumps identify places where the relationship between dependent and independent variables switches in some important way. Although these switch points are often mentioned in management research, their presence in the data is either ignored or postulated ad hoc by testing arbitrarily specified functional forms (e.g., U-shaped or inverted U-shaped relationships). This is problematic if we want accurate tests of our theories. To address this issue, we provide an integrative framework for the identification of nonlinearities. Our approach constitutes a precursor step that researchers will want to conduct before deciding which estimation model may be most appropriate. We also provide instructions on how our approach can be implemented, along with a replicable illustration of the procedure. Our illustrative example shows how the identification of endogenous switch points may lead to significantly different conclusions than those obtained when switch points are ignored or their existence is conjectured arbitrarily. This supports our claim that empirically capturing nonlinearity is important and should be part of our empirical investigations.
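One concrete way to let the data locate a switch point, rather than imposing a functional form, is segmented regression. The sketch below uses the R 'segmented' package on simulated data with a known kink; it is our illustration of the general idea, not the authors' integrative framework.

# Estimate a kink location instead of assuming a quadratic shape
library(segmented)
set.seed(4)
x <- runif(300, 0, 10)
y <- ifelse(x < 5, x, 5 - 0.5 * (x - 5)) + rnorm(300, sd = 0.5)

linear <- lm(y ~ x)                      # what a naive linear test would fit
kinked <- segmented(linear, seg.Z = ~x)  # estimates the breakpoint location
summary(kinked)                          # breakpoint should be near x = 5
AIC(linear, kinked)                      # does allowing a switch point help?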
{"title":"Inflection Points, Kinks, and Jumps: A Statistical Approach to Detecting Nonlinearities","authors":"Peren Arin, M. Minniti, S. Murtinu, Nicola Spagnolo","doi":"10.1177/10944281211058466","DOIUrl":"https://doi.org/10.1177/10944281211058466","url":null,"abstract":"Inflection points, kinks, and jumps identify places where the relationship between dependent and independent variables switches in some important way. Although these switch points are often mentioned in management research, their presence in the data is either ignored, or postulated ad hoc by testing arbitrarily specified functional forms (e.g., U or inverted U-shaped relationships). This is problematic if we want accurate tests for our theories. To address this issue, we provide an integrative framework for the identification of nonlinearities. Our approach constitutes a precursor step that researchers will want to conduct before deciding which estimation model may be most appropriate. We also provide instructions on how our approach can be implemented, and a replicable illustration of the procedure. Our illustrative example shows how the identification of endogenous switch points may lead to significantly different conclusions compared to those obtained when switch points are ignored or their existence is conjectured arbitrarily. This supports our claim that capturing empirically the presence of nonlinearity is important and should be included in our empirical investigations.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"786 - 814"},"PeriodicalIF":9.5,"publicationDate":"2021-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45552385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Making the Invisible Visible: Guidelines for the Coding Process in Meta-Analyses
Pub Date: 2021-12-02 | DOI: 10.1177/10944281211046312 | Organizational Research Methods, 25(1), 716–740
Jessica Villiger, Simone A. Schweiger, Artur Baldauf
This article contributes to the practice of coding in meta-analyses by offering direction and advice for experienced and novice meta-analysts on the “how” of coding. The coding process, the invisible architecture of any meta-analysis, has received comparatively little attention in methodological resources, leaving the research community with insufficient guidance on how it should be rigorously planned (i.e., made to cohere with the research objective), conducted (i.e., producing reliable and valid coding decisions), and reported (i.e., transparently enough for readers to comprehend the authors’ decision-making). A lack of rigor in these areas can lead to erroneous results, which is problematic for entire research communities that build their future knowledge upon meta-analyses. In four steps, the guidelines presented here elucidate how the coding process can be performed in a coherent, efficient, and credible manner that enables connectivity with future research, thereby enhancing the reliability and validity of meta-analytic findings. Our recommendations also support editors and reviewers in advising authors on how to improve the rigor of their coding and ultimately establish higher quality standards in meta-analytic research.
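As one example of checking the reliability of coding decisions, the sketch below computes Cohen's kappa for two coders using the R 'irr' package. This is our illustration with invented codes; the article's guidelines cover the full coding process, not just this single check.

# Intercoder agreement for categorical coding decisions, corrected for chance
library(irr)
codes <- data.frame(coder1 = c("A", "B", "A", "C", "B", "A", "C", "B"),
                    coder2 = c("A", "B", "C", "C", "B", "A", "C", "A"))
kappa2(codes)   # Cohen's kappa for the two coders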
{"title":"Making the Invisible Visible: Guidelines for the Coding Process in Meta-Analyses","authors":"Jessica Villiger, Simone A. Schweiger, Artur Baldauf","doi":"10.1177/10944281211046312","DOIUrl":"https://doi.org/10.1177/10944281211046312","url":null,"abstract":"This article contributes to the practice of coding in meta-analyses by offering direction and advice for experienced and novice meta-analysts on the “how” of coding. The coding process, the invisible architecture of any meta-analysis, has received comparably little attention in methodological resources, leaving the research community with insufficient guidance on “how” it should be rigorously planned (i.e., cohere with the research objective), conducted (i.e., make reliable and valid coding decisions), and reported (i.e., in a sufficiently transparent manner for readers to comprehend the authors’ decision-making). A lack of rigor in these areas can lead to erroneous results, which is problematic for entire research communities who build their future knowledge upon meta-analyses. Along four steps, the guidelines presented here elucidate “how” the coding process can be performed in a coherent, efficient, and credible manner that enables connectivity with future research, thereby enhancing the reliability and validity of meta-analytic findings. Our recommendations also support editors and reviewers in advising authors on how to improve the rigor of their coding and ultimately establish higher quality standards in meta-analytic research.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"716 - 740"},"PeriodicalIF":9.5,"publicationDate":"2021-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47471672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interaction Effects in Cross-Lagged Panel Models: SEM with Latent Interactions Applied to Work-Family Conflict, Job Satisfaction, and Gender
Pub Date: 2021-11-29 | DOI: 10.1177/10944281211043733 | Organizational Research Methods, 25(1), 673–715
Ozlem Ozkok, Manuel J Vaulont, M. Zyphur, Zhen Zhang, Kristopher J Preacher, Peter Koval, Yixia Zheng
Researchers often combine longitudinal panel data analysis with tests of interactions (i.e., moderation). A popular example is the cross-lagged panel model (CLPM). However, interaction tests in CLPMs and related models require caution because stable (i.e., between-level, B) and dynamic (i.e., within-level, W) sources of variation are present in longitudinal data, which can conflate estimates of interaction effects. We address this by integrating the literatures on CLPMs, multilevel moderation, and latent interactions. Distinguishing stable B and dynamic W parts, we describe three types of interactions that are of interest to researchers: (1) purely dynamic, or WxW; (2) cross-level, or BxW; and (3) purely stable, or BxB. We demonstrate estimating latent interaction effects in a CLPM using Bayesian SEM in Mplus, modeling relationships between work-family conflict and job satisfaction with gender as a stable B variable. We support our approach via simulations, demonstrating that our proposed CLPM approach is superior to traditional CLPMs that conflate B and W sources of variation. We describe higher-order nonlinearities as a possible extension, and we discuss limitations and future research directions.
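For orientation, here is a minimal lavaan sketch of a plain two-wave CLPM with an observed product term standing in for the interaction. Note the hedge: the article's actual approach uses Bayesian SEM with latent interactions in Mplus and separates B from W variance, which this simplified sketch (with invented data and names) deliberately does not do; it is the kind of traditional CLPM the article improves on.

# A conventional two-wave CLPM with an observed product term
library(lavaan)
set.seed(5)
n <- 400
wfc1 <- rnorm(n); js1 <- rnorm(n)
wfc2 <- 0.5 * wfc1 - 0.2 * js1 + rnorm(n)
js2  <- 0.5 * js1 - 0.2 * wfc1 - 0.15 * wfc1 * js1 + rnorm(n)
d <- data.frame(wfc1, js1, wfc2, js2, wfc1_js1 = wfc1 * js1)

model <- '
  wfc2 ~ wfc1 + js1              # cross-lagged paths
  js2  ~ js1 + wfc1 + wfc1_js1   # plus the interaction term
'
fit <- sem(model, data = d)
summary(fit)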
{"title":"Interaction Effects in Cross-Lagged Panel Models: SEM with Latent Interactions Applied to Work-Family Conflict, Job Satisfaction, and Gender","authors":"Ozlem Ozkok, Manuel J Vaulont, M. Zyphur, Zhen Zhang, Kristopher J Preacher, Peter Koval, Yixia Zheng","doi":"10.1177/10944281211043733","DOIUrl":"https://doi.org/10.1177/10944281211043733","url":null,"abstract":"Researchers often combine longitudinal panel data analysis with tests of interactions (i.e., moderation). A popular example is the cross-lagged panel model (CLPM). However, interaction tests in CLPMs and related models require caution because stable (i.e., between-level, B) and dynamic (i.e., within-level, W) sources of variation are present in longitudinal data, which can conflate estimates of interaction effects. We address this by integrating literature on CLPMs, multilevel moderation, and latent interactions. Distinguishing stable B and dynamic W parts, we describe three types of interactions that are of interest to researchers: 1) purely dynamic or WxW; 2) cross-level or BxW; and 3) purely stable or BxB. We demonstrate estimating latent interaction effects in a CLPM using a Bayesian SEM in Mplus to apply relationships among work-family conflict and job satisfaction, using gender as a stable B variable. We support our approach via simulations, demonstrating that our proposed CLPM approach is superior to a traditional CLPMs that conflate B and W sources of variation. We describe higher-order nonlinearities as a possible extension, and we discuss limitations and future research directions.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"673 - 715"},"PeriodicalIF":9.5,"publicationDate":"2021-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48057256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inaugural Editorial
Pub Date: 2021-11-13 | DOI: 10.1177/10944281211058903 | Organizational Research Methods, 25(1), 3–5
T. Köhler, L. Lambert
We are honored to be the next co-Editors of ORM. Under the previous editorial teams, led by Larry Williams, Herman Aguinis, Bob Vandenberg, José Cortina, James LeBreton, and Paul Bliese, ORM has been succeeding by every available metric. ORM is widely recognized as the premier outlet for methodological scholarship in the organizational sciences, and this success is due to the collaboration between past Editors, Editorial teams, and Sage. It is not possible to overstate the contributions of the past Editors, and we are excited to take over leadership of this well-established journal. We especially want to credit Paul Bliese for making the handover process an incredibly smooth one. He promised we can reach out to him anytime. Thank you, Paul. We have your phone number on speed dial. Going forward, we are going to implement a few changes to ORM’s editorship structure and increase ORM’s visibility and reach in different research communities. In this editorial, we want to provide a small preview of what we have planned.
{"title":"Inaugural Editorial","authors":"T. Köhler, L. Lambert","doi":"10.1177/10944281211058903","DOIUrl":"https://doi.org/10.1177/10944281211058903","url":null,"abstract":"We are honored to be the next co-Editors of ORM. Under the previous editorial teams, led by Larry Williams, Herman Aguinis, Bob Vandenberg, José Cortina, James LeBreton, and Paul Bliese, ORM has been succeeding by every available metric. ORM is widely recognized as the premier outlet for methodological scholarship in the organizational sciences, and this success is due to the collaboration between past Editors, Editorial teams, and Sage. It is not possible to overstate the contributions of the past Editors, and we are excited to take over leadership of this well-established journal. We especially want to credit Paul Bliese for making the handover process an incredibly smooth one. He promised we can reach out to him anytime. Thank you, Paul. We have your phone number on speed dial. Going forward, we are going to implement a few changes to ORM’s editorship structure and increase ORM’s visibility and reach in different research communities. In this editorial, we want to provide a small preview of what we have planned.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"3 - 5"},"PeriodicalIF":9.5,"publicationDate":"2021-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46409058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}