
Organizational Research Methods: Latest Publications

Building a Bigger Toolbox: The Construct Validity of Existing and Proposed Measures of Careless Responding to Cognitive Ability Tests
Pub Date: 2024-02-14 DOI: 10.1177/10944281231223127
Mark C. Ramsey, N. Bowling
Employers commonly use cognitive ability tests in the personnel selection process. Although ability tests are excellent predictors of job performance, their validity may be compromised when test takers engage in careless responding. It is thus important for researchers to have access to effective careless responding measures, which allow them to screen for careless responding and to evaluate efforts to prevent it. Previous research has primarily used two types of measures to assess careless responding to ability tests—response time and self-reported carelessness. In the current paper, we expand the careless responding assessment toolbox by examining the construct validity of four additional measures: (1) infrequency, (2) instructed-response, (3) long-string, and (4) intra-individual response variability (IRV) indices. Expanding the available set of careless responding indices is important because the strengths of new indices may offset the weaknesses of existing indices and would allow researchers to better assess heterogeneous careless response behaviors. Across three datasets (N = 1,193), we found strong support for the validity of the response-time and infrequency indices, and moderate support for the validity of the instructed-response and IRV indices.
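Two of the proposed indices are simple to compute from a vector of item responses; a minimal sketch (function names are ours, not the paper's): the long-string index is the longest run of identical consecutive responses, and IRV is the within-person standard deviation of a respondent's answers.

```python
import statistics

def longstring(responses):
    """Long-string index: longest run of identical consecutive responses."""
    longest = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def irv(responses):
    """Intra-individual response variability: SD of one person's responses."""
    return statistics.pstdev(responses)
```

High long-string values and near-zero IRV both flag respondents who answer in flat, repetitive patterns; how the cutoffs are chosen is a separate validation question that the paper addresses empirically.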
Citations: 0
Mixed-Keying or Desirability-Matching in the Construction of Forced-Choice Measures? An Empirical Investigation and Practical Recommendations
Pub Date: 2024-02-14 DOI: 10.1177/10944281241229784
Mengtong Li, Bo Zhang, Lingyue Li, Tianjun Sun, Anna Brown
Forced-choice (FC) measures are becoming increasingly popular as an alternative to single-statement (SS) measures. However, to ensure the practical usefulness of an FC measure, it is crucial to address the tension between psychometric properties and faking resistance by balancing mixed keying and social desirability matching. It is currently unknown from an empirical perspective whether the two design criteria can be reconciled, and how they impact respondent reactions. Using a two-wave experimental design, we constructed four FC measures with varying degrees of mixed keying and social desirability matching from the same statement pool and investigated their differences in terms of psychometric properties, faking resistance, and respondent reactions. Results showed that all FC measures demonstrated comparable reliability and induced similar respondent reactions. Forced-choice measures with stricter social desirability matching were more resistant to faking, while FC measures with more mixed-keyed blocks had higher convergent validity with the SS measure and displayed discriminant and criterion-related validity profiles similar to those of the SS benchmark. More importantly, we found that it is possible to strike a balance between social desirability matching and mixed keying, such that FC measures can have adequate psychometric properties and faking resistance. A seven-step recommendation and a tutorial based on the autoFC R package are provided to help readers construct their own FC measures.
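Desirability matching of mixed-keyed blocks can be sketched as a greedy pairing over a statement pool (illustrative Python only; the paper itself provides the autoFC R package for this, and the tuple fields and `max_gap` threshold below are our assumptions): pair a positively and a negatively keyed statement whose social-desirability ratings are close, so the "right" answer is not obvious from desirability alone.

```python
def match_pairs(statements, max_gap=0.5):
    """Greedily pair positively and negatively keyed statements whose
    social-desirability ratings differ by at most max_gap.
    `statements` is a list of (id, keyed_positive, desirability) tuples."""
    pos = sorted((s for s in statements if s[1]), key=lambda s: s[2])
    neg = sorted((s for s in statements if not s[1]), key=lambda s: s[2])
    pairs = []
    while pos and neg:
        p, n = pos[0], neg[0]
        if abs(p[2] - n[2]) <= max_gap:
            pairs.append((p[0], n[0]))   # matched mixed-keyed block
            pos.pop(0)
            neg.pop(0)
        elif p[2] < n[2]:
            pos.pop(0)                   # no close match for this statement
        else:
            neg.pop(0)
    return pairs
```

Tightening `max_gap` enforces stricter desirability matching (more faking resistance) at the cost of discarding more statements, which is exactly the trade-off the study investigates.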
Citations: 0
Confounding Effects of Insufficient Effort Responding Across Survey Sources: The Case of Personality Predicting Performance
Pub Date: 2024-01-23 DOI: 10.1177/10944281231212570
Jason L. Huang, N. Bowling, Benjamin D. McLarty, Donald H. Kluemper, Zhonghao Wang
Insufficient effort responding (IER) to surveys, which occurs when participants provide responses in a haphazard, careless, or random fashion, has been identified as a threat to data quality in survey research because it can inflate observed relationships between self-reported measures. Building on this discovery, we propose two mechanisms that lead IER to exert an unexpected confounding effect between self-reported and informant-rated measures. First, IER can contaminate self-report measures when the means of attentive and inattentive responses differ. Second, IER may share variance with some informant-rated measures, particularly supervisor ratings of participants’ job performance. These two mechanisms operating in tandem suggest that IER can act as a “third variable” that inflates observed relationships between self-reported predictor scores and informant-rated criteria. We tested this possibility using a multisource dataset (N = 398) that included incumbent self-reports of five-factor model personality traits and supervisor ratings of three job performance dimensions—task performance, organizational citizenship behavior (OCB), and counterproductive work behavior (CWB). We observed that the strength of the relationships between self-reported personality traits and supervisor-rated performance decreased significantly after IER was controlled: across the five personality traits, the average reduction in magnitude from the zero-order to the partial correlations was |.08| for task performance, |.07| for OCB, and |.14| for CWB. Because organizational practices are often driven by research linking incumbent-reported predictors to supervisor-rated criteria (e.g., validation of predictors used in various organizational contexts), our findings have important implications for research and practice.
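The zero-order versus partial-correlation comparison reported above can be reproduced with the standard first-order partial-correlation formula, r_xy·z = (r_xy − r_xz·r_yz) / √((1 − r_xz²)(1 − r_yz²)); a dependency-free sketch (not the authors' code):

```python
import math

def pearson(x, y):
    """Zero-order Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

In the study's setup, x would be a self-reported trait, y a supervisor rating, and z the IER index; the reported |.08| to |.14| drops are the differences between `pearson(x, y)` and `partial_corr(x, y, z)`.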
Citations: 0
On the Nuisance of Control Variables in Causal Regression Analysis
Pub Date: 2023-12-25 DOI: 10.1177/10944281231219274
Paul Hünermund, Beyers Louw
Control variables are included in regression analyses to estimate the causal effect of a treatment on an outcome. In this article, however, we argue that the estimated effect sizes of the controls themselves are unlikely to have a causal interpretation. This is because even valid controls are possibly endogenous and represent a combination of several different causal mechanisms operating jointly on the outcome, which is hard to interpret theoretically. Therefore, we recommend refraining from interpreting the marginal effects of controls and focusing instead on the main variables of interest, for which a plausible identification argument can be established. To prevent erroneous managerial or policy implications, coefficients of control variables should be clearly marked as not having a causal interpretation, or omitted from regression tables altogether. Moreover, we advise against using control variable estimates for subsequent theory building and meta-analyses.
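Why an endogenous control can be actively harmful is easy to see in a short simulation (illustrative code, not from the article), assuming a simple collider structure in which the control is caused by both treatment and outcome; conditioning on it pulls the estimated treatment effect from the true value of 1.0 toward zero:

```python
import random

def ols_two(y, x, c):
    """OLS coefficient on x from regressing y on an intercept, x, and c."""
    n = len(y)
    mx, mc, my = sum(x) / n, sum(c) / n, sum(y) / n
    xd = [v - mx for v in x]
    cd = [v - mc for v in c]
    yd = [v - my for v in y]
    sxx = sum(v * v for v in xd)
    scc = sum(v * v for v in cd)
    sxc = sum(a * b for a, b in zip(xd, cd))
    sxy = sum(a * b for a, b in zip(xd, yd))
    scy = sum(a * b for a, b in zip(cd, yd))
    return (scc * sxy - sxc * scy) / (sxx * scc - sxc * sxc)

random.seed(7)
n = 20000
x = [random.gauss(0, 1) for _ in range(n)]                   # treatment
y = [xi + random.gauss(0, 1) for xi in x]                    # outcome; true effect = 1.0
c = [xi + yi + random.gauss(0, 1) for xi, yi in zip(x, y)]   # endogenous "bad" control

mx, my = sum(x) / n, sum(y) / n
naive = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
with_bad_control = ols_two(y, x, c)   # collider bias drives this toward 0
```

The same mechanism is why the coefficient on c itself mixes several causal pathways and should not be read causally, which is the article's central point.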
Citations: 0
To Omit or to Include? Integrating the Frugal and Prolific Perspectives on Control Variable Use
Pub Date: 2023-12-21 DOI: 10.1177/10944281231221703
Fabian Mändli, Mikko Rönkkö
Over recent years, two perspectives on control variable use have emerged in management research. The first originates largely from within the management discipline and argues for being frugal: use control variables as sparingly as possible. The second is rooted in econometrics textbooks and argues for being prolific: include controls generously so as not to risk omitted variable bias, since including irrelevant exogenous variables has little consequence for regression results. We present two reviews showing that the frugal perspective is becoming increasingly popular in research practice, while the prolific perspective has received little explicit attention. We summarize both perspectives’ key arguments and test their specific recommendations in three Monte Carlo simulations. Our results challenge two recommendations of the frugal perspective, “omitting impotent controls” and “avoiding proxies,” but show the detrimental effects of including endogenous controls (bad controls). We recommend considering the control variable selection problem from the perspective of endogeneity and selecting controls based on theory, using causal graphs, rather than focusing on the question of how many controls to include.
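The omitted-variable-bias argument at the heart of the prolific perspective can be reproduced in a small Monte Carlo sketch (illustrative code, not the authors' simulations): omitting a genuine confounder biases the treatment estimate, while adding an irrelevant exogenous control leaves it essentially unchanged.

```python
import random

def ols_slope(y, x, c=None):
    """OLS coefficient on x, optionally partialling out one control c."""
    n = len(y)
    def dm(v):
        m = sum(v) / n
        return [a - m for a in v]
    xd, yd = dm(x), dm(y)
    sxx = sum(a * a for a in xd)
    sxy = sum(a * b for a, b in zip(xd, yd))
    if c is None:
        return sxy / sxx
    cd = dm(c)
    scc = sum(a * a for a in cd)
    sxc = sum(a * b for a, b in zip(xd, cd))
    scy = sum(a * b for a, b in zip(cd, yd))
    return (scc * sxy - sxc * scy) / (sxx * scc - sxc * sxc)

random.seed(42)
n = 20000
z = [random.gauss(0, 1) for _ in range(n)]   # confounder: causes both x and y
w = [random.gauss(0, 1) for _ in range(n)]   # irrelevant exogenous variable
x = [zi + random.gauss(0, 1) for zi in z]
y = [xi + zi + random.gauss(0, 1) for xi, zi in zip(x, z)]   # true effect of x: 1.0

naive = ols_slope(y, x)               # omits confounder z: biased upward (~1.5)
adjusted = ols_slope(y, x, z)         # controls for z: ~1.0
with_irrelevant = ols_slope(y, x, w)  # irrelevant control: ~same as naive
```

The endogenous-control case that the article's simulations warn against is the flip side: there the control sits on a causal path from (or to) the outcome, and including it distorts rather than corrects the estimate.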
Citations: 0
Resisting the Objectification of Qualitative Research: The Unsilencing of Context, Researchers, and Noninterview Data
Pub Date: 2023-11-20 DOI: 10.1177/10944281231215119
Hans Hansen, S. Elias, Anna Stevenson, Anne D. Smith, B. Alexander, Marcos Barros
Based on an analysis of qualitative research papers published between 2019 and 2021 in four top-tier management journals, we outline three interrelated silences that play a role in the objectification of qualitative research: silencing noninterview data, silencing the researcher, and silencing context. Our analysis unpacks six silencing moves: creating a hierarchy of data, marginalizing noninterview data, downplaying researcher subjectivity, weakening the value of researcher interpretation, thin description, and backgrounding context. We suggest how researchers might resist the objectification of qualitative research and regain its original promise of developing more impactful and interesting theories: noninterview data can be unsilenced by democratizing data sources and utilizing nonverbal data, the researcher can be unsilenced by leveraging engagement and crafting interpretations, and context can be unsilenced by foregrounding it as an interpretive lens and by contextualizing the researcher, the researched, and the research project. Overall, we contribute to current understandings of the objectification of qualitative research both by unpacking the particular moves that play a role in it and by delineating specific practices that help researchers embrace subjectivity and engage in inspired theorizing.
基于对2019年至2021年期间发表在四本顶级管理期刊上的定性研究论文的分析,我们概述了在定性研究客观化过程中起作用的三种相互关联的沉默:对非访谈数据的沉默、对研究者的沉默和对背景的沉默。我们的分析揭示了六种缄默行为:建立数据等级制度、边缘化非访谈数据、淡化研究者的主观性、削弱研究者解释的价值、稀薄描述以及背景化。我们建议研究人员如何抵制定性研究的客观化,并在发展更有影响力、更有趣的理论时重拾其最初的承诺:可以通过数据源民主化和利用非语言数据来消除非访谈数据的无声化,可以通过利用参与和精心制作解释来消除研究人员的无声化,最后,可以通过将背景作为解释透镜并将研究人员、被研究者和研究项目背景化来消除背景的无声化。总之,我们通过解读在定性研究客体化过程中发挥作用的特殊举措,以及界定有助于研究人员接受主观性并参与灵感理论化的具体实践,为当前对定性研究客体化的理解做出了贡献。
Real Research with Fake Data: A Tutorial on Conducting Computer Simulation for Research and Teaching
Pub Date : 2023-11-20 DOI: 10.1177/10944281231215024
Michael C. Sturman
Although many have recognized the value of computer simulations as a research tool, instruction on building computer simulations is absent from most doctoral education and research methods texts. This paper provides an introductory tutorial on computer simulations for research and teaching. It shows the techniques needed to create data based on desired relationships among the variables or based on a specified model. The paper also introduces techniques to make data more “interesting,” including adding skew or kurtosis, creating multi-item measures with unreliability, making data multilevel, and incorporating mediated, moderated, and nonlinear relationships. The methods described in the paper are illustrated using Excel, Mplus, and R; furthermore, the functionality of using ChatGPT to create code in R is explored and compared to the paper's illustrative examples. Supplemental files are provided that illustrate each example used in the paper as well as several more advanced techniques mentioned in the paper. The goal of this paper is not to help inform experts on simulation; rather, it is to open up to all readers the powerful potential of this research and teaching tool.
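The core moves the abstract describes can be sketched in a few lines. The paper's own illustrations use Excel, Mplus, and R; the Python sketch below is a language-agnostic analogue, not the paper's code. The variable names, the target correlation matrix, and the .80 reliability value are illustrative assumptions. It shows two of the listed techniques: imposing a desired correlation structure on simulated data via a Cholesky factor, and degrading a true score into an unreliable multi-item-style observed measure.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # large sample so observed correlations sit close to the targets

# Target population correlation matrix for three variables (call them X, M, Y).
target = np.array([
    [1.0, 0.5, 0.3],
    [0.5, 1.0, 0.4],
    [0.3, 0.4, 1.0],
])

# Impose the correlation structure on independent standard normals
# by multiplying through the Cholesky factor of the target matrix.
z = rng.standard_normal((n, 3))
data = z @ np.linalg.cholesky(target).T

observed = np.corrcoef(data, rowvar=False)
print(np.round(observed, 2))  # should approximate `target`

# Unreliability: build an observed score as a mix of true score and noise
# so that its population reliability is `rel` (an assumed value here).
rel = 0.80
true_score = data[:, 0]
observed_x = np.sqrt(rel) * true_score + np.sqrt(1 - rel) * rng.standard_normal(n)
# corr(true, observed) converges to sqrt(rel) ≈ 0.894
```

The same pattern extends to the paper's other techniques: skew or kurtosis can be introduced by transforming the normal draws, and mediated or nonlinear relationships by generating downstream variables as functions of upstream ones plus noise.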
Journal: Organizational Research Methods