Team Composition Revisited: A Team Member Attribute Alignment Approach
Pub Date: 2021-10-18 | DOI: 10.1177/10944281211042388
Kyle J. Emich, Li Lu, Amanda J. Ferguson, R. Peterson, Michael McCourt
Research methods for studying team composition tend to employ either a variable-centered or person-centered approach. The variable-centered approach allows scholars to consider how patterns of attributes between team members influence teams, while the person-centered approach allows scholars to consider how variation in multiple attributes within team members influences subgroup formation and its effects. Team composition theory, however, is becoming increasingly sophisticated, assuming variation on multiple attributes both within and between team members—for example, in predicting how a team functions differently when its most assertive members are also optimistic rather than pessimistic. To support this new theory, we propose an attribute alignment approach, which complements the variable-centered and person-centered approaches by modeling teams as matrices of their members and their members’ attributes. We first demonstrate how to calculate attribute alignment by determining the vector norm and vector angle between team members’ attributes. Then, we demonstrate how the alignment of team member personality attributes (neuroticism and agreeableness) affects team relationship conflict. Finally, we discuss the potential of using the attribute alignment approach to enrich broader team research.
{"title":"Team Composition Revisited: A Team Member Attribute Alignment Approach","authors":"Kyle J. Emich, Li Lu, Amanda J. Ferguson, R. Peterson, Michael McCourt","doi":"10.1177/10944281211042388","DOIUrl":"https://doi.org/10.1177/10944281211042388","url":null,"abstract":"Research methods for studying team composition tend to employ either a variable-centered or person-centered approach. The variable-centered approach allows scholars to consider how patterns of attributes between team members influence teams, while the person-centered approach allows scholars to consider how variation in multiple attributes within team members influences subgroup formation and its effects. Team composition theory, however, is becoming increasingly sophisticated, assuming variation on multiple attributes both within and between team members—for example, in predicting how a team functions differently when its most assertive members are also optimistic rather than pessimistic. To support this new theory, we propose an attribute alignment approach, which complements the variable-centered and person-centered approaches by modeling teams as matrices of their members and their members’ attributes. We first demonstrate how to calculate attribute alignment by determining the vector norm and vector angle between team members’ attributes. Then, we demonstrate how the alignment of team member personality attributes (neuroticism and agreeableness) affects team relationship conflict. Finally, we discuss the potential of using the attribute alignment approach to enrich broader team research.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"642 - 672"},"PeriodicalIF":9.5,"publicationDate":"2021-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42555071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Dimensionality of the Ideal Point Item Response Theory Model Using Posterior Predictive Model Checking
Pub Date: 2021-10-18 | DOI: 10.1177/10944281211050609
Seang-Hwane Joo, Philseok Lee, Jung Yeon Park, Stephen E. Stark
Although the use of ideal point item response theory (IRT) models for organizational research has increased over the last decade, the assessment of construct dimensionality of ideal point scales has been overlooked in previous research. In this study, we developed and evaluated dimensionality assessment methods for an ideal point IRT model under the Bayesian framework. We applied the posterior predictive model checking (PPMC) approach to the most widely used ideal point IRT model, the generalized graded unfolding model (GGUM). We conducted a Monte Carlo simulation to compare the performance of item pair discrepancy statistics and to evaluate the Type I error and power rates of the methods. The simulation results indicated that the Bayesian dimensionality detection method controlled Type I errors reasonably well across the conditions. In addition, the proposed method showed better performance than existing methods, yielding acceptable power when 20% of the items were generated from the secondary dimension. Organizational implications and limitations of the study are further discussed.
{"title":"Assessing Dimensionality of the Ideal Point Item Response Theory Model Using Posterior Predictive Model Checking","authors":"Seang-Hwane Joo, Philseok Lee, Jung Yeon Park, Stephen E. Stark","doi":"10.1177/10944281211050609","DOIUrl":"https://doi.org/10.1177/10944281211050609","url":null,"abstract":"Although the use of ideal point item response theory (IRT) models for organizational research has increased over the last decade, the assessment of construct dimensionality of ideal point scales has been overlooked in previous research. In this study, we developed and evaluated dimensionality assessment methods for an ideal point IRT model under the Bayesian framework. We applied the posterior predictive model checking (PPMC) approach to the most widely used ideal point IRT model, the generalized graded unfolding model (GGUM). We conducted a Monte Carlo simulation to compare the performance of item pair discrepancy statistics and to evaluate the Type I error and power rates of the methods. The simulation results indicated that the Bayesian dimensionality detection method controlled Type I errors reasonably well across the conditions. In addition, the proposed method showed better performance than existing methods, yielding acceptable power when 20% of the items were generated from the secondary dimension. Organizational implications and limitations of the study are further discussed.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"26 1","pages":"353 - 382"},"PeriodicalIF":9.5,"publicationDate":"2021-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45044899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ORM-CARMA Virtual Feature Topics for Advanced Reviewer Development
Pub Date: 2021-10-01 | DOI: 10.1177/10944281211030648
L. J. Williams, G. Banks, R. Vandenberg
Providing developmental peer reviews is one of the most critical services performed by researchers in the organizational sciences (Bedeian, 2003). Yet, completing helpful and constructive reviews is not easy (Epstein, 1995; Feldman, 2005). This challenge may be due, in part, to the fact that our field provides only limited formal reviewer training in graduate programs and through professional development workshops (PDWs). Much of what new reviewers learn happens through informal training with mentors (Carpenter, 2009). Without effective training, reviewers may be prone to biases in their methodological evaluations of manuscripts (Banks et al., 2016; Bedeian, Taylor, & Miller, 2010; Emerson et al., 2010) or may simply lack the expertise needed to evaluate manuscripts due to the large variety of content areas and methodological techniques being employed in research. Many editorials have been written to provide guidance for basic reviewer development (e.g., Lee, 1995). Recently, the Society for Industrial and Organizational Psychology (SIOP) and the Consortium for the Advancement of Research Methods and Analysis (CARMA) started an initiative around basic reviewer development (http://carmarmep.org/siop-carma-reviewer-series/). This ongoing training serves to introduce basic reviewer competencies (Koehler et al., 2020) and to recommend readings and training videos that are freely available to help new and even experienced reviewers improve the quality of their reviews. While basic reviewer development is laudable, there is also a need for more formal training on advanced methodological topics. Hence, Organizational Research Methods, along with CARMA, is now introducing a new Virtual Feature Topic targeted at advanced reviewer development.
{"title":"ORM-CARMA Virtual Feature Topics for Advanced Reviewer Development","authors":"L. J. Williams, G. Banks, R. Vandenberg","doi":"10.1177/10944281211030648","DOIUrl":"https://doi.org/10.1177/10944281211030648","url":null,"abstract":"Providing developmental peer reviewers is one of the most critical services performed by researchers in the organizational sciences (Bedeian, 2003). Yet, completing helpful and constructive reviews is not easy (Epstein, 1995; Feldman, 2005). This challenge may be due, in part, to the fact that our field provides only limited formal reviewer training in graduate programs and through professional development workshops (PDWs). Much of what new reviewers learn happens through informal training with mentors (Carpenter, 2009). Without effective training, reviewers may be prone to biases in their methodological evaluations of manuscripts (Banks et al., 2016; Bedeian, Taylor, & Miller, 2010; Emerson et al., 2010) or may simply lack the expertise needed to evaluate manuscripts due to the large variety of content areas and methodological techniques being employed in research. Many editorials have been written to provide guidance for basic reviewer development (e.g., Lee, 1995). Recently, the Society for Industrial and Organizational Psychology (SIOP) and the Consortium for the Advancement of Research Methods and Analysis (CARMA) started an initiative around basic reviewer development (http://carmarmep.org/siop-carma-reviewer-series/). This ongoing training serves to introduce basic reviewer competencies (Koehler et al., 2020), recommend readings, and training videos that are freely available to help new and even experienced reviewers improve the quality of their reviews. While basic reviewer development is laudable, there is also a need for more formal training on advanced methodological topics. Hence, Organizational Research Methods along with CARMA are now introducing a new Virtual Feature Topic targeted at advanced reviewer development.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"24 1","pages":"675 - 677"},"PeriodicalIF":9.5,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49065474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Power, Accuracy, and Precision of the Relational Event Model
Pub Date: 2021-10-01 | DOI: 10.1177/1094428120963830
Aaron Schecter, E. Quintane
The relational event model (REM) solves a problem for organizational researchers who have access to sequences of time-stamped interactions. It enables them to estimate statistical models without collapsing the data into cross-sectional panels, which removes timing and sequence information. However, there is little guidance in the extant literature regarding issues that may affect REM’s power, precision, and accuracy: How many events or actors are needed? How large should the risk set be? How should statistics be scaled? To gain insights into these issues, we conduct a series of experiments using simulated sequences of relational events under different conditions and using different sampling and scaling strategies. We also provide an empirical example using email communications in a real-life context. Our results indicate that, in most cases, the power and precision levels of REMs are good, making it a strong explanatory model. However, REM suffers from issues of accuracy that can be severe in certain cases, making it a poor predictive model. We provide a set of practical recommendations to guide researchers’ use of REMs in organizational research.
{"title":"The Power, Accuracy, and Precision of the Relational Event Model","authors":"Aaron Schecter, E. Quintane","doi":"10.1177/1094428120963830","DOIUrl":"https://doi.org/10.1177/1094428120963830","url":null,"abstract":"The relational event model (REM) solves a problem for organizational researchers who have access to sequences of time-stamped interactions. It enables them to estimate statistical models without collapsing the data into cross-sectional panels, which removes timing and sequence information. However, there is little guidance in the extant literature regarding issues that may affect REM’s power, precision, and accuracy: How many events or actors are needed? How large should the risk set be? How should statistics be scaled? To gain insights into these issues, we conduct a series of experiments using simulated sequences of relational events under different conditions and using different sampling and scaling strategies. We also provide an empirical example using email communications in a real-life context. Our results indicate that, in most cases, the power and precision levels of REMs are good, making it a strong explanatory model. However, REM suffers from issues of accuracy that can be severe in certain cases, making it a poor predictive model. We provide a set of practical recommendations to guide researchers’ use of REMs in organizational research.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"24 1","pages":"802 - 829"},"PeriodicalIF":9.5,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1094428120963830","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46952812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recommendations for Reviewing Meta-Analyses in Organizational Research
Pub Date: 2021-10-01 | DOI: 10.1177/1094428120967089
J. DeSimone, M. Brannick, Ernest H. O’Boyle, J. Ryu
This article encourages transparency in the reporting of meta-analytic procedures. Specifically, we highlight aspects of meta-analytic search, coding, data presentation, and data analysis where published meta-analyses often fall short in presenting sufficient information to allow replication. We identify opportunities where reviewers can request additional information or analyses that will enhance transparent reporting practices and facilitate the evaluation of quality in meta-analytic reporting. We focus on concerns specific to (or prevalent in) meta-analyses conducted in organizational research. In doing so, we reference a number of existing and emerging techniques, highlighting their contribution to meta-analysis while emphasizing key information reviewers may request. Our focus is primarily on meta-analyses, but secondary uses of meta-analytic data are also considered. We conclude by providing a checklist for reviewers in an effort to facilitate the review process as it pertains to the goals of transparency and replicability.
{"title":"Recommendations for Reviewing Meta-Analyses in Organizational Research","authors":"J. DeSimone, M. Brannick, Ernest H. O’Boyle, J. Ryu","doi":"10.1177/1094428120967089","DOIUrl":"https://doi.org/10.1177/1094428120967089","url":null,"abstract":"This article encourages transparency in the reporting of meta-analytic procedures. Specifically, we highlight aspects of meta-analytic search, coding, data presentation, and data analysis where published meta-analyses often fall short in presenting sufficient information to allow replication. We identify opportunities where reviewers can request additional information or analyses that will enhance transparent reporting practices and facilitate the evaluation of quality in meta-analytic reporting. We focus on concerns specific to (or prevalent in) meta-analyses conducted in organizational research. In doing so, we reference a number of existing and emerging techniques, highlighting their contribution to meta-analysis while emphasizing key information reviewers may request. Our focus is primarily on meta-analyses, but secondary uses of meta-analytic data are also considered. We conclude by providing a checklist for reviewers in an effort to facilitate the review process as it pertains to the goals of transparency and replicability.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"24 1","pages":"694 - 717"},"PeriodicalIF":9.5,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1094428120967089","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41902304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applying Neuroscience to Emergent Processes in Teams
Pub Date: 2021-07-01 | DOI: 10.1177/1094428120915516
Danni Wang, D. Waldman, Pierre A. Balthazard, Maja Stikic, Nicola M. Pless, Thomas Maak, C. Berka, Travis Richardson
In this article, we describe how neuroscience can be used in the study of team dynamics. Specifically, we point out methodological limitations in current team-based research and explain how quantitative electroencephalogram technology can be applied to the study of emergent processes in teams. In so doing, we describe how this technology and related analyses can explain emergent processes in teams through an example of the neural assessment of attention of team members who are engaged in a problem-solving task. Specifically, we demonstrate how the real-time, continuous neural signatures of team members’ attention in a problem-solving context emerge in teams over time. We then consider how further development of this technology might advance our understanding of the emergence of other team-based constructs and research questions.
{"title":"Applying Neuroscience to Emergent Processes in Teams","authors":"Danni Wang, D. Waldman, Pierre A. Balthazard, Maja Stikic, Nicola M. Pless, Thomas Maak, C. Berka, Travis Richardson","doi":"10.1177/1094428120915516","DOIUrl":"https://doi.org/10.1177/1094428120915516","url":null,"abstract":"In this article, we describe how neuroscience can be used in the study of team dynamics. Specifically, we point out methodological limitations in current team-based research and explain how quantitative electroencephalogram technology can be applied to the study of emergent processes in teams. In so doing, we describe how this technology and related analyses can explain emergent processes in teams through an example of the neural assessment of attention of team members who are engaged in a problem-solving task. Specifically, we demonstrate how the real-time, continuous neural signatures of team members’ attention in a problem-solving context emerges in teams over time. We then consider how further development of this technology might advance our understanding of the emergence of other team-based constructs and research questions.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"24 1","pages":"595 - 615"},"PeriodicalIF":9.5,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1094428120915516","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47461126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scoring Dimension-Level Job Performance From Narrative Comments: Validity and Generalizability When Using Natural Language Processing
Pub Date: 2021-07-01 | DOI: 10.1177/1094428120930815
Andrew B. Speer
Performance appraisal narratives are qualitative descriptions of employee job performance. This data source has seen increased research attention due to the ability to efficiently derive insights using natural language processing (NLP). The current study details the development of NLP scoring for performance dimensions from narrative text and then investigates validity and generalizability evidence for those scores. Specifically, narrative valence scores were created to measure a priori performance dimensions. These scores were derived using bag of words and word embedding features and then modeled using modern prediction algorithms. Construct validity evidence was investigated across three samples, revealing that the scores converged with independent human ratings of the text, aligned with numerical performance ratings made during the appraisal, and demonstrated some degree of discriminant validity. However, construct validity evidence differed based on which NLP algorithm was used to derive scores. In addition, valence scores generalized to both downward and upward rating contexts. Finally, the performance valence algorithms generalized better in contexts where the same qualitative survey design was used compared with contexts where different instructions were given to elicit narrative text.
{"title":"Scoring Dimension-Level Job Performance From Narrative Comments: Validity and Generalizability When Using Natural Language Processing","authors":"Andrew B. Speer","doi":"10.1177/1094428120930815","DOIUrl":"https://doi.org/10.1177/1094428120930815","url":null,"abstract":"Performance appraisal narratives are qualitative descriptions of employee job performance. This data source has seen increased research attention due to the ability to efficiently derive insights using natural language processing (NLP). The current study details the development of NLP scoring for performance dimensions from narrative text and then investigates validity and generalizability evidence for those scores. Specifically, narrative valence scores were created to measure a priori performance dimensions. These scores were derived using bag of words and word embedding features and then modeled using modern prediction algorithms. Construct validity evidence was investigated across three samples, revealing that the scores converged with independent human ratings of the text, aligned numerical performance ratings made during the appraisal, and demonstrated some degree of discriminant validity. However, construct validity evidence differed based on which NLP algorithm was used to derive scores. In addition, valence scores generalized to both downward and upward rating contexts. Finally, the performance valence algorithms generalized better in contexts where the same qualitative survey design was used compared with contexts where different instructions were given to elicit narrative text.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"24 1","pages":"572 - 594"},"PeriodicalIF":9.5,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1094428120930815","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42014110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Corrigendum to From Nuisance to Novel Research Questions: Using Multilevel Models to Predict Heterogeneous Variances
Pub Date: 2021-07-01 | DOI: 10.1177/10944281211011778
Lester, H. F., Cullen-Lester, K. L., & Walters, R. W. (2019). From nuisance to novel research questions: Using multilevel models to predict heterogeneous variances. Organizational Research Methods, 24(2), 342-388. In the above-referenced article, which was printed in the April 2021 issue of Organizational Research Methods, the funding information has been updated; the correct funding statement should read as follows:
{"title":"Corrigendum to From Nuisance to Novel Research Questions: Using Multilevel Models to Predict Heterogeneous Variances","authors":"","doi":"10.1177/10944281211011778","DOIUrl":"https://doi.org/10.1177/10944281211011778","url":null,"abstract":"Lester, H. F., Cullen-Lester, K. L., & Walters, R. W. (2019). From nuisance to novel research questions: Using multilevel models to predict heterogeneous variances. Organizational Research Methods, 24(2), 342-388. From the above referenced article, which was printed in the April 2021 issue of Organizational Research Methods, the funding information has been updated, correct funding statement should read as:","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"24 1","pages":"671 - 671"},"PeriodicalIF":9.5,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10944281211011778","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45259173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Planned Missingness: How to and How Much?
Pub Date: 2021-05-28 | DOI: 10.1177/10944281211016534
Charlene Zhang, Martin C. Yu
Planned missingness (PM) can be implemented for survey studies to reduce study length and respondent fatigue. Based on a large sample of Big Five personality data, the present study simulates how factors including PM design (three-form and random percentage [RP]), amount of missingness, and sample size affect the ability of full-information maximum likelihood (FIML) estimation to treat missing data. Results show that although the effectiveness of FIML for treating missing data decreases as sample size decreases and amount of missing data increases, estimates only deviate substantially from truth in extreme conditions. Furthermore, the specific PM design, whether it be a three-form or RP design, makes little difference, although the RP design should be easier to implement for computer-based surveys. The examination of specific boundary conditions for the application of PM as paired with FIML techniques has important implications for both the research methods literature and practitioners regularly conducting survey research.
{"title":"Planned Missingness: How to and How Much?","authors":"Charlene Zhang, Martin C. Yu","doi":"10.1177/10944281211016534","DOIUrl":"https://doi.org/10.1177/10944281211016534","url":null,"abstract":"Planned missingness (PM) can be implemented for survey studies to reduce study length and respondent fatigue. Based on a large sample of Big Five personality data, the present study simulates how factors including PM design (three-form and random percentage [RP]), amount of missingness, and sample size affect the ability of full-information maximum likelihood (FIML) estimation to treat missing data. Results show that although the effectiveness of FIML for treating missing data decreases as sample size decreases and amount of missing data increases, estimates only deviate substantially from truth in extreme conditions. Furthermore, the specific PM design, whether it be a three-form or RP design, makes little difference although the RP design should be easier to implement for computer-based surveys. The examination of specific boundary conditions for the application of PM as paired with FIML techniques has important implications for both the research methods literature and practitioners regularly conducting survey research","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"623 - 641"},"PeriodicalIF":9.5,"publicationDate":"2021-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10944281211016534","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47116554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Test-Retest Reliability Generalization Meta-Analysis of Judgments Via the Policy-Capturing Technique
Pub Date: 2021-05-12 | DOI: 10.1177/10944281211011529
Ze Zhu, Alan J. Tomassetti, R. Dalal, Shannon W. Schrader, Kevin Loo, Isaac E. Sabat, Balca Alaybek, You Zhou, Chelsea Jones, Shea Fyffe
Policy capturing is a widely used technique, but the temporal stability of policy-capturing judgments has long been a cause for concern. This article emphasizes the importance of reporting reliability, and in particular test-retest reliability, estimates in policy-capturing studies. We found that only 164 of 955 policy-capturing studies (i.e., 17.17%) reported a test-retest reliability estimate. We then conducted a reliability generalization meta-analysis on policy-capturing studies that did report test-retest reliability estimates—and we obtained an average reliability estimate of .78. We additionally examined 16 potential methodological and substantive antecedents to test-retest reliability (equivalent to moderators in validity generalization studies). We found that test-retest reliability was robust to variation in 14 of the 16 factors examined but that reliability was higher in paper-and-pencil studies than in web-based studies and was higher for behavioral intention judgments than for other (e.g., attitudinal and perceptual) judgments. We provide an agenda for future research. Finally, we provide several best-practice recommendations for researchers (and journal reviewers) with regard to (a) reporting test-retest reliability, (b) designing policy-capturing studies for appropriate reportage, and (c) properly interpreting test-retest reliability in policy-capturing studies.
{"title":"A Test-Retest Reliability Generalization Meta-Analysis of Judgments Via the Policy-Capturing Technique","authors":"Ze Zhu, Alan J. Tomassetti, R. Dalal, Shannon W. Schrader, Kevin Loo, Isaac E. Sabat, Balca Alaybek, You Zhou, Chelsea Jones, Shea Fyffe","doi":"10.1177/10944281211011529","DOIUrl":"https://doi.org/10.1177/10944281211011529","url":null,"abstract":"Policy capturing is a widely used technique, but the temporal stability of policy-capturing judgments has long been a cause for concern. This article emphasizes the importance of reporting reliability, and in particular test-retest reliability, estimates in policy-capturing studies. We found that only 164 of 955 policy-capturing studies (i.e., 17.17%) reported a test-retest reliability estimate. We then conducted a reliability generalization meta-analysis on policy-capturing studies that did report test-retest reliability estimates—and we obtained an average reliability estimate of .78. We additionally examined 16 potential methodological and substantive antecedents to test-retest reliability (equivalent to moderators in validity generalization studies). We found that test-retest reliability was robust to variation in 14 of the 16 factors examined but that reliability was higher in paper-and-pencil studies than in web-based studies and was higher for behavioral intention judgments than for other (e.g., attitudinal and perceptual) judgments. We provide an agenda for future research. Finally, we provide several best-practice recommendations for researchers (and journal reviewers) with regard to (a) reporting test-retest reliability, (b) designing policy-capturing studies for appropriate reportage, and (c) properly interpreting test-retest reliability in policy-capturing studies.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"541 - 574"},"PeriodicalIF":9.5,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10944281211011529","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44952843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}