Pub Date: 2021-07-01 | DOI: 10.1177/10944281211011778
Corrigendum to From Nuisance to Novel Research Questions: Using Multilevel Models to Predict Heterogeneous Variances
Lester, H. F., Cullen-Lester, K. L., & Walters, R. W. (2019). From nuisance to novel research questions: Using multilevel models to predict heterogeneous variances. Organizational Research Methods, 24(2), 342-388. In the above-referenced article, which appeared in the April 2021 issue of Organizational Research Methods, the funding information has been updated; the correct funding statement should read as follows:
Pub Date: 2021-05-28 | DOI: 10.1177/10944281211016534
Planned Missingness: How to and How Much?
Charlene Zhang, Martin C. Yu
Planned missingness (PM) can be implemented in survey studies to reduce study length and respondent fatigue. Based on a large sample of Big Five personality data, the present study simulates how factors including PM design (three-form and random percentage [RP]), amount of missingness, and sample size affect the ability of full-information maximum likelihood (FIML) estimation to treat missing data. Results show that although the effectiveness of FIML for treating missing data decreases as sample size decreases and the amount of missing data increases, estimates deviate substantially from the truth only in extreme conditions. Furthermore, the specific PM design, whether three-form or RP, makes little difference, although the RP design should be easier to implement for computer-based surveys. The examination of specific boundary conditions for applying PM paired with FIML techniques has important implications for both the research methods literature and practitioners who regularly conduct survey research.
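A minimal sketch of the simulation logic, assuming simulated multivariate normal data and the lavaan package (all names and values here are illustrative; this is not the study's code): impose a random-percentage missingness pattern on complete data, then recover the correlations with FIML.

```r
# Illustrative sketch, not the study's code: random-percentage (RP) planned
# missingness on simulated scores, estimated with FIML.
library(MASS)    # for mvrnorm()
library(lavaan)  # for lavCor() with FIML estimation

set.seed(1)
n <- 1000
Sigma <- matrix(0.3, 3, 3); diag(Sigma) <- 1
complete <- as.data.frame(mvrnorm(n, mu = rep(0, 3), Sigma = Sigma))
names(complete) <- c("x1", "x2", "x3")

# RP design: every respondent skips roughly 20% of items, chosen at random
pm <- complete
pm[matrix(runif(n * 3) < 0.20, n, 3)] <- NA

lavCor(pm, missing = "fiml")  # FIML correlation matrix from incomplete data
cor(complete)                 # complete-data benchmark
```

Varying the sample size and the missingness rate in such a setup reproduces, in miniature, the boundary-condition comparisons the abstract describes.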
Pub Date: 2021-05-12 | DOI: 10.1177/10944281211011529
A Test-Retest Reliability Generalization Meta-Analysis of Judgments Via the Policy-Capturing Technique
Ze Zhu, Alan J. Tomassetti, R. Dalal, Shannon W. Schrader, Kevin Loo, Isaac E. Sabat, Balca Alaybek, You Zhou, Chelsea Jones, Shea Fyffe
Policy capturing is a widely used technique, but the temporal stability of policy-capturing judgments has long been a cause for concern. This article emphasizes the importance of reporting reliability, and in particular test-retest reliability, estimates in policy-capturing studies. We found that only 164 of 955 policy-capturing studies (i.e., 17.17%) reported a test-retest reliability estimate. We then conducted a reliability generalization meta-analysis on policy-capturing studies that did report test-retest reliability estimates—and we obtained an average reliability estimate of .78. We additionally examined 16 potential methodological and substantive antecedents to test-retest reliability (equivalent to moderators in validity generalization studies). We found that test-retest reliability was robust to variation in 14 of the 16 factors examined but that reliability was higher in paper-and-pencil studies than in web-based studies and was higher for behavioral intention judgments than for other (e.g., attitudinal and perceptual) judgments. We provide an agenda for future research. Finally, we provide several best-practice recommendations for researchers (and journal reviewers) with regard to (a) reporting test-retest reliability, (b) designing policy-capturing studies for appropriate reportage, and (c) properly interpreting test-retest reliability in policy-capturing studies.
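The pooled estimate in a reliability generalization analysis of this kind can be reproduced in miniature with the metafor package in R; the reliabilities and sample sizes below are hypothetical, not the article's data:

```r
# Toy reliability generalization meta-analysis (hypothetical data)
library(metafor)

dat <- data.frame(
  ri = c(0.81, 0.74, 0.79, 0.69, 0.85),  # test-retest reliability estimates
  ni = c(120, 85, 200, 60, 150)          # per-study sample sizes
)

# Fisher's z transformation stabilizes the variance of correlation-type estimates
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)

# random-effects model; moderators such as survey mode would enter via mods = ~ ...
res <- rma(yi, vi, data = dat, method = "REML")

predict(res, transf = transf.ztor)  # pooled estimate, back on the reliability metric
```

The article's moderator analyses (e.g., paper-and-pencil vs. web-based administration) correspond to adding predictors to such a model.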
Pub Date: 2021-05-04 | DOI: 10.1177/10944281211008652
Systematicity in Organizational Research Literature Reviews: A Framework and Assessment
Zeki Simsek, B. Fox, Ciaran Heavey
In this study, we first develop a framework that presents systematicity as an encompassing orientation toward the application of explicit methods in the practice of literature reviews, informed by the principles of transparency, coverage, saturation, connectedness, universalism, and coherence. We then supplement that conceptual development with empirical insights into the reported practices of systematicity in a sample of 165 published reviews across three journals in organizational research. We finally trace implications for the future conduct of literature reviews, including the potential perils of systematicity without mindfulness.
Pub Date: 2021-04-20 | DOI: 10.1177/10944281211005167
New Network Models for the Analysis of Social Contagion in Organizations: An Introduction to Autologistic Actor Attribute Models
Andrew Parker, F. Pallotti, A. Lomi
Autologistic actor attribute models (ALAAMs) provide new analytical opportunities to advance research on how individual attitudes, cognitions, behaviors, and outcomes diffuse through networks of social relations in which individuals in organizations are embedded. ALAAMs add to available statistical models of social contagion the possibility of formulating and testing competing hypotheses about the specific mechanisms that shape patterns of adoption/diffusion. The main objective of this article is to provide an introduction and a guide to the specification, estimation, interpretation, and evaluation of ALAAMs. Using original data, we demonstrate the value of ALAAMs in an analysis of academic performance and social networks in a class of graduate management students. We find evidence that both high and low performance are contagious, that is, diffuse through social contact. However, the contagion mechanisms that contribute to the diffusion of high performance and low performance differ subtly and systematically. Our results help us identify new questions that ALAAMs allow us to ask, new answers they may be able to provide, and the constraints that need to be relaxed to facilitate their more general adoption in organizational research.
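For orientation, ALAAMs share the exponential-family form of ERGMs, modeling a vector of binary actor attributes conditional on a fixed, observed network (the notation below is a standard rendering, not quoted from the article):

```latex
\Pr(Y = y \mid X = x) \;=\; \frac{1}{\kappa(\theta)}\,
\exp\!\Big( \textstyle\sum_{I} \theta_I \, z_I(y, x) \Big)
```

Here y is the vector of binary actor attributes (e.g., adopter vs. nonadopter), x the observed network of ties, each z_I(y, x) a statistic counting a configuration that combines attributes and ties (e.g., adopters connected to adopters), θ_I its parameter, and κ(θ) the normalizing constant. Competing contagion mechanisms correspond to competing sets of z_I statistics, which is what makes the hypothesis tests described above possible.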
Pub Date: 2021-04-15 | DOI: 10.1177/10944281211002911
Temporal Brokering: A Measure of Brokerage as a Behavioral Process
E. Quintane, M. Wood, John Dunn, L. Falzon
Extant research in organizational networks has provided critical insights into understanding the benefits of occupying a brokerage position. More recently, researchers have moved beyond the brokerage position to consider the brokering processes (arbitration and collaboration) brokers engage in and their implications for performance. However, brokering processes are typically measured using scales that reflect individuals’ orientation toward engaging in a behavior, rather than the behavior itself. In this article, we propose a measure that captures the behavioral process of brokering. The measure indicates the extent to which actors engage in arbitration versus collaboration based on sequences of time-stamped relational events, such as emails, message boards, and recordings of meetings. We demonstrate the validity of our measure as well as its predictive ability. By leveraging the temporal information inherent in sequences of relational events, our behavioral measure of brokering creates opportunities for researchers to explore the dynamics of brokerage and their impact on individuals, and also paves the way for a systematic examination of the temporal dynamics of networks.
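To make the event-sequence logic concrete, the sketch below codes one plausible operationalization in R. The seven-day window, the direct-contact rule, and the function name are all illustrative assumptions, not the published measure; consult the article for the exact definitions.

```r
# Hypothetical sketch: classify brokered two-step sequences in a time-stamped
# relational event log. A sequence i -> b, then b -> k within a window is
# "collaboration" if i and k also interact directly inside the window,
# "arbitration" if they do not.
classify_brokering <- function(events, window = 7) {
  events <- events[order(events$time), ]
  results <- data.frame()
  for (i in seq_len(nrow(events))) {
    e1 <- events[i, ]
    later <- events[events$time > e1$time & events$time <= e1$time + window, ]
    # second step: the broker (receiver of e1) passes something on to a third party
    second <- later[later$sender == e1$receiver & later$receiver != e1$sender, ]
    if (nrow(second) == 0) next
    for (j in seq_len(nrow(second))) {
      e2 <- second[j, ]
      # do the two counterparts also interact directly within the window?
      direct <- any((later$sender == e1$sender & later$receiver == e2$receiver) |
                    (later$sender == e2$receiver & later$receiver == e1$sender))
      results <- rbind(results, data.frame(
        broker = e1$receiver,
        type   = if (direct) "collaboration" else "arbitration"
      ))
    }
  }
  # per-broker tendency: share of arbitration among that broker's sequences
  tapply(results$type == "arbitration", results$broker, mean)
}

# tiny example log: b brokers collaboratively; c brokers by arbitration
ev <- data.frame(sender   = c("a", "b", "a", "c"),
                 receiver = c("b", "c", "c", "a"),
                 time     = c(1, 2, 3, 5))
classify_brokering(ev)
```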
Pub Date: 2021-04-15 | DOI: 10.1177/10944281211002904
Faking Detection Improved: Adopting a Likert Item Response Process Tree Model
Tianjun Sun, Bo Zhang, Mengyang Cao, F. Drasgow
With the increasing popularity of noncognitive inventories in personnel selection, organizations typically wish to be able to tell when a job applicant purposefully manufactures a favorable impression. Past faking research has primarily focused on how to reduce faking via instrument design, warnings, and statistical corrections for faking. This article took a new approach by examining the effects of faking (experimentally manipulated and contextually driven) on response processes. We modified a recently introduced item response theory tree modeling procedure, the three-process model, to identify faking in two studies. Study 1 examined self-reported vocational interest assessment responses using an induced faking experimental design. Study 2 examined self-reported personality assessment responses when some people were in a high-stakes situation (i.e., selection). Across the two studies, individuals instructed or expected to fake were found to engage in more extreme responding. By identifying the underlying differences between fakers and honest respondents, the new approach improves our understanding of faking. Percentage cutoffs based on extreme responding produced a faker classification precision of 85% on average.
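The central move in such IRT tree models is to split each Likert response into binary pseudo-items, one per hypothesized response process. A minimal sketch of the standard Böckenholt-style three-process decomposition for a 5-point item (illustrative; the article's exact specification may differ):

```r
# Decompose 5-point Likert responses into three binary pseudo-items,
# which would then be fit with binary IRT models (one per process).
decompose_likert <- function(x) {
  data.frame(
    midpoint  = as.integer(x == 3),                            # chose the neutral midpoint?
    direction = ifelse(x == 3, NA, as.integer(x > 3)),         # agree (1) vs. disagree (0)
    extremity = ifelse(x == 3, NA, as.integer(x %in% c(1, 5))) # extreme (1) vs. moderate (0)
  )
}

decompose_likert(c(1, 2, 3, 4, 5))
```

Under this decomposition, the finding that fakers respond more extremely shows up as elevated endorsement of the extremity pseudo-items, which is what the percentage-cutoff classification exploits.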
Pub Date: 2021-04-01 | DOI: 10.1177/10944281211002293
Corrigendum to On Ignoring the Random Effects Assumption in Multilevel Models: Review, Critique, and Recommendations
Pub Date: 2021-04-01 | DOI: 10.1177/1094428119857471
Meta-Analyses as a Multi-Level Model
Janaki Gooty, G. Banks, Andrew C. Loignon, Scott Tonidandel, Courtney E. Williams
Meta-analyses are well known and widely implemented in almost every domain of research in management as well as the social, medical, and behavioral sciences. While this technique is useful for determining validity coefficients (i.e., effect sizes), meta-analyses are predicated on the assumption of independence of primary effect sizes, which might be routinely violated in the organizational sciences. Here, we discuss the implications of violating the independence assumption and demonstrate how meta-analysis could be cast as a multilevel, variance-known (Vknown) model to account for such dependency in primary studies’ effect sizes. We illustrate such techniques for meta-analytic data via the HLM 7.0 software, as it remains the most widely used multilevel analysis software in management. In so doing, we draw on examples from educational psychology (where such techniques were first developed), the organizational sciences, and a Monte Carlo simulation (Appendix). We conclude with a discussion of implications, caveats, and future extensions. Our Appendix details features of a newly developed application that is free (based on R), user-friendly, and provides an alternative to the HLM program.
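As one concrete, free alternative to the HLM workflow (and distinct from the authors' own R application described in their Appendix), metafor's rma.mv() fits the same Vknown-style model in plain R; the data below are hypothetical:

```r
# Three-level meta-analysis: effect sizes nested within studies (toy data)
library(metafor)

dat <- data.frame(
  study = c(1, 1, 2, 2, 3),              # primary study identifier
  esid  = 1:5,                           # effect-size identifier
  yi    = c(0.30, 0.25, 0.10, 0.15, 0.40),  # observed effect sizes
  vi    = c(0.010, 0.020, 0.015, 0.010, 0.030)  # known sampling variances ("V-known")
)

# Random intercepts at the study level and the effect-size level account for
# the dependence among effect sizes drawn from the same primary study.
res <- rma.mv(yi, vi, random = ~ 1 | study/esid, data = dat)
summary(res)
```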
Pub Date: 2021-03-19 | DOI: 10.1177/1094428121993228
Long-Run Effects in Dynamic Systems: New Tools for Cross-Lagged Panel Models
A. Shamsollahi, M. Zyphur, Ozlem Ozkok
Cross-lagged panel models (CLPMs) are common, but their applications often focus on “short-run” effects among temporally proximal observations. This addresses questions about how dynamic systems may immediately respond to interventions, but fails to show how systems evolve over longer timeframes. We explore three types of “long-run” effects in dynamic systems that extend recent work on “impulse responses,” which reflect potential long-run effects of one-time interventions. Going beyond these, we first treat evaluations of system (in)stability by testing for “permanent effects,” which are important because in unstable systems even a one-time intervention may have enduring effects. Second, we explore classic econometric long-run effects that show how dynamic systems may respond to interventions that are sustained over time. Third, we treat “accumulated responses” to model how systems may respond to repeated interventions over time. We illustrate tests of each long-run effect in a simulated dataset and we provide all materials online including user-friendly R code that automates estimating, testing, reporting, and plotting all effects (see https://doi.org/10.26188/13506861). We conclude by emphasizing the value of aligning specific longitudinal hypotheses with quantitative methods.
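The three long-run quantities have simple closed forms for a stable cross-lagged system, sketched below with made-up coefficients (the authors' own R code is available at the DOI they provide; this is not it):

```r
# Toy illustration: long-run effects implied by a stable bivariate
# cross-lagged coefficient matrix B (coefficients are made up).
B <- matrix(c(0.5, 0.2,
              0.1, 0.4), nrow = 2, byrow = TRUE)

# Stability: a one-time shock dies out iff all eigenvalues of B lie inside
# the unit circle; otherwise even a one-time intervention has permanent effects.
stopifnot(max(Mod(eigen(B)$values)) < 1)

# Impulse response h periods after a one-time unit intervention: B^h
impulse <- function(h) {
  out <- diag(2)
  for (k in seq_len(h)) out <- out %*% B
  out
}
impulse(3)

# Long-run response to an intervention sustained in every period:
# I + B + B^2 + ... = (I - B)^{-1}
solve(diag(2) - B)

# Accumulated response after h repeated unit interventions: sum of B^k, k = 0..h
accumulated <- function(h) {
  total <- diag(2); power <- diag(2)
  for (k in seq_len(h)) { power <- power %*% B; total <- total + power }
  total
}
accumulated(10)  # approaches solve(diag(2) - B) as h grows
```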