Pub Date: 2024-02-14 | DOI: 10.1177/10944281231223127
Mark C. Ramsey, N. Bowling
Employers commonly use cognitive ability tests in the personnel selection process. Although ability tests are excellent predictors of job performance, their validity may be compromised when test takers engage in careless responding. It is thus important for researchers to have access to effective careless responding measures, which allow them to screen for careless responding and to evaluate efforts to prevent it. Previous research has primarily used two types of measures to assess careless responding to ability tests—response time and self-reported carelessness. In the current paper, we expand the careless responding assessment toolbox by examining the construct validity of four additional measures: (1) infrequency, (2) instructed-response, (3) long-string, and (4) intra-individual response variability (IRV) indices. Expanding the available set of careless responding indices is important because the strengths of new indices may offset the weaknesses of existing indices and allow researchers to better assess heterogeneous careless response behaviors. Across three datasets (N = 1,193), we found strong support for the validity of the response-time and infrequency indices, and moderate support for the validity of the instructed-response and IRV indices.
{"title":"Building a Bigger Toolbox: The Construct Validity of Existing and Proposed Measures of Careless Responding to Cognitive Ability Tests","authors":"Mark C. Ramsey, N. Bowling","doi":"10.1177/10944281231223127","DOIUrl":"https://doi.org/10.1177/10944281231223127","url":null,"abstract":"Employers commonly use cognitive ability tests in the personnel selection process. Although ability tests are excellent predictors of job performance, their validity may be compromised when test takers engage in careless responding. It is thus important for researchers to have access to effective careless responding measures, which allow researchers to screen for careless responding and to evaluate efforts to prevent careless responding. Previous research has primarily used two types of measures to assess careless responding to ability tests—response time and self-reported carelessness. In the current paper, we expand the careless responding assessment toolbox by examining the construct validity of four additional measures: (1) infrequency, (2) instructed-response, (3) long-string, and (4) intra-individual response variability (IRV) indices. Expanding the available set of careless responding indices is important because the strengths of new indices may offset the weaknesses of existing indices and would allow researchers to better assess heterogeneous careless response behaviors. Across three datasets ( N = 1,193), we found strong support for the validity of the response-time and infrequency indices, and moderate support for the validity of the instructed-response and IRV indices.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"161 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139839392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-14 | DOI: 10.1177/10944281241229784
Mengtong Li, Bo Zhang, Lingyue Li, Tianjun Sun, Anna Brown
Forced-choice (FC) measures are becoming increasingly popular as an alternative to single-statement (SS) measures. However, to ensure the practical usefulness of an FC measure, it is crucial to address the tension between psychometric properties and faking resistance by balancing mixed keying and social desirability matching. It is currently unknown from an empirical perspective whether the two design criteria can be reconciled, and how they impact respondent reactions. Using a two-wave experimental design, we constructed four FC measures with varying degrees of mixed keying and social desirability matching from the same statement pool and investigated their differences in terms of psychometric properties, faking resistance, and respondent reactions. Results showed that all FC measures demonstrated comparable reliability and induced similar respondent reactions. Forced-choice measures with stricter social desirability matching were more faking resistant, while FC measures with more mixed-keyed blocks had higher convergent validity with the SS measure and displayed discriminant and criterion-related validity profiles similar to those of the SS benchmark. More importantly, we found that it is possible to strike a balance between social desirability matching and mixed keying, such that FC measures can have adequate psychometric properties and faking resistance. A seven-step recommendation and a tutorial based on the autoFC R package are provided to help readers construct their own FC measures.
{"title":"Mixed-Keying or Desirability-Matching in the Construction of Forced-Choice Measures? An Empirical Investigation and Practical Recommendations","authors":"Mengtong Li, Bo Zhang, Lingyue Li, Tianjun Sun, Anna Brown","doi":"10.1177/10944281241229784","DOIUrl":"https://doi.org/10.1177/10944281241229784","url":null,"abstract":"Forced-choice (FC) measures are becoming increasingly popular as an alternative to single-statement (SS) measures. However, to ensure the practical usefulness of an FC measure, it is crucial to address the tension between psychometric properties and faking resistance by balancing mixed keying and social desirability matching. It is currently unknown from an empirical perspective whether the two design criteria can be reconciled, and how they impact respondent reactions. By conducting a two-wave experimental design, we constructed four FC measures with varying degrees of mixed-keying and social desirability matching from the same statement pool and investigated their differences in terms of psychometric properties, faking resistance, and respondent reactions. Results showed that all FC measures demonstrated comparable reliability and induced similar respondent reactions. Forced-choice measures with stricter social desirability matching were more faking resistant, while FC measures with more mixed keyed blocks had higher convergent validity with the SS measure and displayed similar discriminant and criterion-related validity profiles as the SS benchmark. More importantly, we found that it is possible to strike a balance between social desirability matching and mixed keying, such that FC measures can have adequate psychometric properties and faking resistance. A seven-step recommendation and a tutorial based on the autoFC R package were provided to help readers construct their own FC measures.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"43 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139779085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-23 | DOI: 10.1177/10944281231212570
Jason L. Huang, N. Bowling, Benjamin D. McLarty, Donald H. Kluemper, Zhonghao Wang
Insufficient effort responding (IER) to surveys, which occurs when participants provide responses in a haphazard, careless, or random fashion, has been identified as a threat to data quality in survey research because it can inflate observed relationships between self-reported measures. Building on this discovery, we propose two mechanisms that lead IER to exert an unexpected confounding effect between self-reported and informant-rated measures. First, IER can contaminate self-report measures when the means of attentive and inattentive responses differ. Second, IER may share variance with some informant-rated measures, particularly supervisor ratings of participants’ job performance. These two mechanisms operating in tandem suggest that IER can act as a “third variable” that inflates observed relationships between self-reported predictor scores and informant-rated criteria. We tested this possibility using a multisource dataset (N = 398) that included incumbent self-reports of five-factor model personality traits and supervisor ratings of three job performance dimensions—task performance, organizational citizenship behavior (OCB), and counterproductive work behavior (CWB). We observed that the strength of the relationships between self-reported personality traits and supervisor-rated performance decreased significantly after IER was controlled: Across the five personality traits, the average reduction in magnitude from the zero-order to the partial correlations was |.08| for task performance, |.07| for OCB, and |.14| for CWB. Because organizational practices are often driven by research linking incumbent-reported predictors to supervisor-rated criteria (e.g., validation of predictors used in various organizational contexts), our findings have important implications for research and practice.
{"title":"Confounding Effects of Insufficient Effort Responding Across Survey Sources: The Case of Personality Predicting Performance","authors":"Jason L. Huang, N. Bowling, Benjamin D. McLarty, Donald H. Kluemper, Zhonghao Wang","doi":"10.1177/10944281231212570","DOIUrl":"https://doi.org/10.1177/10944281231212570","url":null,"abstract":"Insufficient effort responding (IER) to surveys, which occurs when participants provide responses in a haphazard, careless, or random fashion, has been identified as a threat to data quality in survey research because it can inflate observed relationships between self-reported measures. Building on this discovery, we propose two mechanisms that lead to IER exerting an unexpected confounding effect between self-reported and informant-rated measures. First, IER can contaminate self-report measures when the means of attentive and inattentive responses differ. Second, IER may share variance with some informant-rated measures, particularly supervisor ratings of participants’ job performance. These two mechanisms operating in tandem would suggest that IER can act as a “third variable” that inflates observed relationships between self-reported predictor scores and informant-rated criteria. We tested this possibility using a multisource dataset ( N = 398) that included incumbent self-reports of five-factor model personality traits and supervisor-ratings of three job performance dimensions—task performance, organizational citizenship behavior (OCB), and counterproductive work behavior (CWB). We observed that the strength of the relationships between self-reported personality traits and supervisor-rated performance significantly decreased after IER was controlled: Across the five personality traits, the average reduction of magnitude from the zero-order to partial correlations was |.08| for task performance, |.07| for OCB, and |.14| for CWB. Because organizational practices are often driven by research linking incumbent-reported predictors to supervisor-rated criteria (e.g., validation of predictors used in various organizational contexts), our findings have important implications for research and practice.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"119 30","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139605438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-25 | DOI: 10.1177/10944281231219274
Paul Hünermund, Beyers Louw
Control variables are included in regression analyses to estimate the causal effect of a treatment on an outcome. In this article, however, we argue that the estimated effect sizes of the controls themselves are unlikely to have a causal interpretation. This is because even valid controls are possibly endogenous and represent a combination of several different causal mechanisms operating jointly on the outcome, which is hard to interpret theoretically. Therefore, we recommend refraining from interpreting the marginal effects of controls and focusing on the main variables of interest, for which a plausible identification argument can be established. To prevent erroneous managerial or policy implications, coefficients of control variables should be clearly marked as not having a causal interpretation or omitted from regression tables altogether. Moreover, we advise against using control variable estimates for subsequent theory building and meta-analyses.
{"title":"On the Nuisance of Control Variables in Causal Regression Analysis","authors":"Paul Hünermund, Beyers Louw","doi":"10.1177/10944281231219274","DOIUrl":"https://doi.org/10.1177/10944281231219274","url":null,"abstract":"Control variables are included in regression analyses to estimate the causal effect of a treatment on an outcome. In this article, we argue that the estimated effect sizes of controls are unlikely to have a causal interpretation themselves, though. This is because even valid controls are possibly endogenous and represent a combination of several different causal mechanisms operating jointly on the outcome, which is hard to interpret theoretically. Therefore, we recommend refraining from interpreting the marginal effects of controls and focusing on the main variables of interest, for which a plausible identification argument can be established. To prevent erroneous managerial or policy implications, coefficients of control variables should be clearly marked as not having a causal interpretation or omitted from regression tables altogether. Moreover, we advise against using control variable estimates for subsequent theory building and meta-analyses.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"33 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139157393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-21 | DOI: 10.1177/10944281231221703
Fabian Mändli, Mikko Rönkkö
In recent years, two perspectives on control variable use have emerged in management research. The first originates largely from within the management discipline and argues for remaining frugal, using control variables as sparingly as possible. The second is rooted in econometrics textbooks and argues for being prolific, including controls generously so as not to risk omitted variable bias, on the grounds that irrelevant exogenous variables have little consequence for regression results. We present two reviews showing that the frugal perspective is becoming increasingly popular in research practice, while the prolific perspective has received little explicit attention. We summarize both perspectives’ key arguments and test their specific recommendations in three Monte Carlo simulations. Our results challenge the frugal perspective's two recommendations of “omitting impotent controls” and “avoiding proxies” but show the detrimental effects of including endogenous controls (bad controls). We recommend approaching the control variable selection problem from the perspective of endogeneity and selecting controls based on theory using causal graphs, rather than focusing on the question of how many controls to include.
{"title":"To Omit or to Include? Integrating the Frugal and Prolific Perspectives on Control Variable Use","authors":"Fabian Mändli, Mikko Rönkkö","doi":"10.1177/10944281231221703","DOIUrl":"https://doi.org/10.1177/10944281231221703","url":null,"abstract":"Over the recent years, two perspectives on control variable use have emerged in management research: the first originates largely from within the management discipline and argues to remain frugal, to use control variables as sparsely as possible. The second is rooted in econometrics textbooks and argues to be prolific, to be generous in control variable inclusion to not risk omitted variable bias, and because including irrelevant exogenous variables has little consequences for regression results. We present two reviews showing that the frugal perspective is becoming increasingly popular in research practice, while the prolific perspective has received little explicit attention. We summarize both perspectives’ key arguments and test their specific recommendations in three Monte Carlo simulations. Our results challenge the two recommendations of the frugal perspective of “omitting impotent controls” and “avoiding proxies” but show the detrimental effects of including endogenous controls (bad controls). We recommend considering the control variable selection problem from the perspective of endogeneity and selecting controls based on theory using causal graphs instead of focusing on the many or few questions.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"24 18","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139166153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-20 | DOI: 10.1177/10944281231215119
Hans Hansen, S. Elias, Anna Stevenson, Anne D. Smith, B. Alexander, Marcos Barros
Based on an analysis of qualitative research papers published between 2019 and 2021 in four top-tier management journals, we outline three interrelated silences that play a role in the objectification of qualitative research: silencing noninterview data, silencing the researcher, and silencing context. Our analysis unpacks six silencing moves: creating a hierarchy of data, marginalizing noninterview data, downplaying researcher subjectivity, weakening the value of researcher interpretation, offering thin description, and backgrounding context. We suggest how researchers might resist the objectification of qualitative research and regain its original promise in developing more impactful and interesting theories: noninterview data can be unsilenced by democratizing data sources and utilizing nonverbal data, the researcher can be unsilenced by leveraging engagement and crafting interpretations, and, finally, context can be unsilenced by foregrounding context as an interpretative lens and contextualizing the researcher, the researched, and the research project. Overall, we contribute to current understandings of the objectification of qualitative research by both unpacking particular moves that play a role in it and delineating specific practices that help researchers embrace subjectivity and engage in inspired theorizing.
{"title":"Resisting the Objectification of Qualitative Research: The Unsilencing of Context, Researchers, and Noninterview Data","authors":"Hans Hansen, S. Elias, Anna Stevenson, Anne D. Smith, B. Alexander, Marcos Barros","doi":"10.1177/10944281231215119","DOIUrl":"https://doi.org/10.1177/10944281231215119","url":null,"abstract":"Based on an analysis of qualitative research papers published between 2019 and 2021 in four top-tier management journals, we outline three interrelated silences that play a role in the objectification of qualitative research: silencing of noninterview data, silencing the researcher, and silencing context. Our analysis unpacks six silencing moves: creating a hierarchy of data, marginalizing noninterview data, downplaying researcher subjectivity, weakening the value of researcher interpretation, thin description, and backgrounding context. We suggest how researchers might resist the objectification of qualitative research and regain its original promise in developing more impactful and interesting theories: noninterview data can be unsilenced by democratizing data sources and utilizing nonverbal data, the researcher can be unsilenced by leveraging engagement and crafting interpretations, and finally, context can be unsilenced by foregrounding context as an interpretative lens and contextualizing the researcher, the researched, and the research project. Overall, we contribute to current understandings of the objectification of qualitative research by both unpacking particular moves that play a role in it and delineating specific practices that help researchers embrace subjectivity and engage in inspired theorizing.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"18 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139257260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-20 | DOI: 10.1177/10944281231215024
Michael C. Sturman
Although many have recognized the value of computer simulations as a research tool, instruction on building computer simulations is absent from most doctoral education and research methods texts. This paper provides an introductory tutorial on computer simulations for research and teaching. It shows the techniques needed to create data based on desired relationships among the variables or based on a specified model. The paper also introduces techniques to make data more “interesting,” including adding skew or kurtosis, creating multi-item measures with unreliability, making data multilevel, and incorporating mediated, moderated, and nonlinear relationships. The methods described in the paper are illustrated using Excel, Mplus, and R; furthermore, the functionality of using ChatGPT to create code in R is explored and compared to the paper's illustrative examples. Supplemental files are provided that illustrate each example used in the paper as well as several more advanced techniques mentioned in the paper. The goal of this paper is not to help inform experts on simulation; rather, it is to open up to all readers the powerful potential of this research and teaching tool.
{"title":"Real Research with Fake Data: A Tutorial on Conducting Computer Simulation for Research and Teaching","authors":"Michael C. Sturman","doi":"10.1177/10944281231215024","DOIUrl":"https://doi.org/10.1177/10944281231215024","url":null,"abstract":"Although many have recognized the value of computer simulations as a research tool, instruction on building computer simulations is absent from most doctoral education and research methods texts. This paper provides an introductory tutorial on computer simulations for research and teaching. It shows the techniques needed to create data based on desired relationships among the variables or based on a specified model. The paper also introduces techniques to make data more “interesting,” including adding skew or kurtosis, creating multi-item measures with unreliability, making data multilevel, and incorporating mediated, moderated, and nonlinear relationships. The methods described in the paper are illustrated using Excel, Mplus, and R; furthermore, the functionality of using ChatGPT to create code in R is explored and compared to the paper's illustrative examples. Supplemental files are provided that illustrate each example used in the paper as well as several more advanced techniques mentioned in the paper. The goal of this paper is not to help inform experts on simulation; rather, it is to open up to all readers the powerful potential of this research and teaching tool.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139255541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}