Openness to Experience and Overexcitabilities, a Jangle Fallacy With Ethical Implications: A Response to Barry Grant

Roeper Review: A Journal on Gifted Education · Published 2021-04-03 · DOI: 10.1080/02783193.2021.1881749 · IF 1.7, Q2 (Education, Special)
M. A. Vuyk, B. Kerr

Abstract

A jangle fallacy is the assumption that two measures assess different constructs simply because they carry different names, when they in fact tap the same underlying construct (Gonzalez et al., in press). This is the case for openness to experience and overexcitabilities (OEs), as we argued previously (Vuyk et al., 2016a, 2016b) and Grant (2021) critiqued. We respond to his claims, insisting that gifted education ought to embrace openness to experience and the Five-Factor Model (FFM) when referring to personality.

Studying construct overlap is less common than developing new measures, yet the two should be equally common; assessments of potential jangle fallacies include close study of measure content, factor analysis, multitrait-multimethod matrices, incremental prediction of chosen outcomes, and replications (Gonzalez et al., in press). A multitrait-multimethod matrix was impossible because the Overexcitability Questionnaire-II (OEQ-II) was the only available instrument. We examined the content of the measures, recommended future replications, and focused on factor analysis to determine dimensionality. Grant (2021) incorrectly insisted that, in our study, only the model with separate openness facets and OEs could support our hypothesis that openness and OEs represent the same latent constructs. However, the model in which each openness-OE pair loaded on a single dimension also represents the equivalence of constructs in factor analysis. We had to operationalize openness and OEs using the available instruments, even with the OEQ-II's flaws. Grant (2021) dismisses their common variance of 58% to 75% as low and highlights the 3% for psychomotor OE, after we had already noted that the overlap there was not as expected.

A non-peer-reviewed technical report on IQ tests explains that variances, which are group-level statistics, mean that an individual's score on one measure might differ from their score on the other measure. Grant cited this as evidence that "OEs and OtE facet pairs are different constructs sharing common variance" (p. 12; italics added), which does not logically follow. All we can conclude from McGraw's statement is that measures, even measures of the same construct, may yield different individual scores despite high correlations; McGraw did not state that this is evidence the constructs differ. Openness research offers many ways of organizing the construct; one is the NEO, which aligns seamlessly with the five OEs. Other openness models show less facet-level alignment; for example, Woo et al. (2014) propose facets of intellectual efficiency, ingenuity, curiosity, esthetics, tolerance, and depth, which do not map directly onto every OE yet parallel some of them and the general concept. As such, the argument that our study is without merit because two facets and one OE do not directly correspond is flawed; the overlap still appears to be a jangle fallacy. Note a recent example: a meta-analytic correlation of .84 between the disputed construct of grit and the FFM factor of conscientiousness; even though the measures did not "perfectly" overlap, they were deemed close enough to represent the same construct (Credé et al., 2017).

Grant (2021) erroneously accused us of exploiting researcher degrees of freedom, a questionable reporting practice in which researchers choose to publish only models with significant results and leave failed models unpublished (Simmons et al., 2011). In our paper, every decision regarding models is explicitly stated, following the six requirements of Simmons et al. (2011) to avoid this practice: We finished data collection with a well-powered N before conducting analyses. We had many more than 20 observations per cell. We reported all variables assessed. We reported failed models. Where we eliminated variables, we reported results both with and without them. As our models did not include covariates at any stage, we had none to report. Simmons et al. do not propose blind rule-following; they question selective and misleading reporting, which their proposed reporting requirements should prevent. This then leads to our moral and ethical stance, which Grant thoroughly criticized but misunderstood. As psychologists, we are committed to the Ethical Standards for Psychologists, which begin with a familiar principle: strive
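The grit-conscientiousness comparison above rests on a standard statistical identity: the variance two measures share equals the square of their correlation. A minimal sketch, using the .84 meta-analytic correlation the abstract cites (Credé et al., 2017):

```python
# Shared variance between two measures equals the squared correlation (r^2).
# r = .84 is the meta-analytic grit-conscientiousness correlation the
# abstract cites; the computation itself is generic.
r = 0.84
shared_variance = r ** 2
print(f"shared variance: {shared_variance:.0%}")  # 71%
```

A shared variance around 71% was deemed close enough for grit and conscientiousness to count as one construct, which is the benchmark the abstract invokes for the 58%-75% overlap between openness facets and OEs.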
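McGraw's point, as the abstract reads it, is that a high group-level correlation is fully compatible with sizable individual-level discrepancies, and that this says nothing about whether the constructs differ. A quick simulation illustrates this; the correlation of roughly .85 and all generated values are illustrative assumptions, not figures from the studies under discussion:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two standardized "measures" of one latent trait, built so their
# population correlation is about .85 (illustrative values only).
latent = rng.standard_normal(n)
measure_a = 0.92 * latent + 0.39 * rng.standard_normal(n)
measure_b = 0.92 * latent + 0.39 * rng.standard_normal(n)

r = np.corrcoef(measure_a, measure_b)[0, 1]
max_gap = np.max(np.abs(measure_a - measure_b))

print(f"group-level correlation: {r:.2f}")
print(f"largest individual discrepancy (SD units): {max_gap:.2f}")
```

Even though both measures reflect the same latent trait by construction, some individuals score more than a full standard deviation apart on the two instruments, which is exactly the pattern the abstract argues cannot serve as evidence that the constructs differ.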