
Methodology: European Journal of Research Methods for The Behavioral and Social Sciences (Latest Publications)

The Cognitive Interviewing Reporting Framework (CIRF): towards the harmonization of cognitive testing reports.
IF 3.1 | CAS Tier 3 (Psychology) | Q2 PSYCHOLOGY, MATHEMATICAL | Pub Date: 2013-01-01 | DOI: 10.1027/1614-2241/A000075 | Vol. 9, pp. 87-95
H. Boeije, Gordon B. Willis
Cognitive interviewing is an important qualitative tool for the testing, development, and evaluation of survey questionnaires. Despite the widespread adoption of cognitive testing, there remain large variations in the manner in which specific procedures are implemented, and it is not clear from reports and publications that have utilized cognitive interviewing exactly what procedures have been used, as critical details are often missing. Especially for establishing the effectiveness of procedural variants, it is essential that cognitive interviewing reports contain a comprehensive description of the methods used. One approach to working toward more complete reporting would be to develop and adhere to a common framework for reporting these results. In this article we introduce the Cognitive Interviewing Reporting Framework (CIRF), which applies a checklist approach, and which is based on several existing checklists for reviewing and reporting qualitative research. We propose that researchers apply the CIRF in order to test its usability and to suggest potential adjustments. Over the longer term, the CIRF can be evaluated with respect to its utility in improving the quality of cognitive interviewing reports.
Citations: 82
Reflections on the Cognitive Interviewing Reporting Framework: Efficacy, expectations, and promise for the future.
IF 3.1 | CAS Tier 3 (Psychology) | Q2 PSYCHOLOGY, MATHEMATICAL | Pub Date: 2013-01-01 | DOI: 10.1027/1614-2241/A000074 | Vol. 9, pp. 123-128
Gordon B. Willis, H. Boeije
Based on the experiences of three research groups using and evaluating the Cognitive Interviewing Reporting Framework (CIRF), we draw conclusions about the utility of the CIRF as a guide to creating cognitive testing reports. Authors generally found the CIRF checklist to be usable, and that it led to a more complete description of key steps involved. However, despite the explicit direction by the CIRF to include a full explanation of major steps and features (e.g., research objectives and research design), the three cognitive testing reports tended to simply state what was done, without further justification. Authors varied in their judgments concerning whether the CIRF requires the appropriate level of detail. Overall, we believe that current cognitive interviewing practice will benefit from including, within cognitive testing reports, the 10 categories of information specified by the CIRF. Future use of the CIRF may serve to direct the overall research project from the start, and to further the goal of ...
Citations: 11
Analyzing observed composite differences across groups: Is partial measurement invariance enough?
IF 3.1 | CAS Tier 3 (Psychology) | Q2 PSYCHOLOGY, MATHEMATICAL | Pub Date: 2013-01-01 | DOI: 10.1027/1614-2241/A000049 | Vol. 9, pp. 1-12
Holger Steinmetz
Although the use of structural equation modeling has increased during the last decades, the typical procedure to investigate mean differences across groups is still to create an observed composite score from several indicators and to compare the composite’s mean across the groups. Whereas the structural equation modeling literature has emphasized that a comparison of latent means presupposes equal factor loadings and indicator intercepts for most of the indicators (i.e., partial invariance), it is still unknown if partial invariance is sufficient when relying on observed composites. This Monte-Carlo study investigated whether one or two unequal factor loadings and indicator intercepts in a composite can lead to wrong conclusions regarding latent mean differences. Results show that unequal indicator intercepts substantially affect the composite mean difference and the probability of a significant composite difference. In contrast, unequal factor loadings demonstrate only small effects. It is concluded that...
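To make the issue concrete, here is a minimal Monte Carlo sketch in Python (not the study's actual simulation design; the sample sizes, loadings, and size of the intercept shift are illustrative assumptions). It shows how a single non-invariant indicator intercept can bias the observed composite mean difference even when the latent means of the two groups are identical, so the composite comparison flags a "difference" well above the nominal 5% rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_group(n, loadings, intercepts, latent_mean=0.0, resid_sd=0.6):
    """Generate indicator data from a one-factor model."""
    eta = rng.normal(latent_mean, 1.0, size=n)                 # latent scores
    eps = rng.normal(0.0, resid_sd, size=(n, len(loadings)))   # unique parts
    return intercepts + np.outer(eta, loadings) + eps          # observed items

loadings   = np.array([0.7, 0.7, 0.7, 0.7])
intercepts = np.zeros(4)
shifted    = intercepts.copy()
shifted[0] = 0.5            # one non-invariant indicator intercept in group 2

n_reps, n_per_group, significant = 1000, 200, 0
for _ in range(n_reps):
    g1 = simulate_group(n_per_group, loadings, intercepts)   # latent mean 0
    g2 = simulate_group(n_per_group, loadings, shifted)      # latent mean also 0
    _, p = stats.ttest_ind(g1.sum(axis=1), g2.sum(axis=1))   # composite comparison
    significant += p < 0.05

print(f"'Significant' composite mean difference in {significant / n_reps:.1%} "
      f"of replications, despite identical latent means")
```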
Citations: 228
Non-Graphical Solutions for Cattell's Scree Test
IF 3.1 | CAS Tier 3 (Psychology) | Q2 PSYCHOLOGY, MATHEMATICAL | Pub Date: 2013-01-01 | DOI: 10.1027/1614-2241/A000051 | Vol. 9, pp. 23-29
Gilles Raîche, Theodore A. Walls, D. Magis, Martin Riopel, J. Blais
Most of the strategies that have been proposed to determine the number of components that account for the most variation in a principal components analysis of a correlation matrix rely on the analysis of the eigenvalues and on numerical solutions. Cattell's scree test is a graphical strategy with a nonnumerical solution to determine the number of components to retain. Like Kaiser's rule, this test is one of the most frequently used strategies for determining the number of components to retain. However, the graphical nature of the scree test does not definitively establish the number of components to retain. To circumvent this issue, some numerical solutions are proposed, one in the spirit of Cattell's work and dealing with the scree part of the eigenvalues plot, and one focusing on the elbow part of this plot. A simulation study compares the efficiency of these solutions to those of other previously proposed methods. Extensions to factor analysis are possible and may be particularly useful with many low-dimensional components. Several strategies have been proposed to determine the number of components that account for the most variation in a principal components analysis of a correlation matrix. Most of these rely on the analysis of the eigenvalues of the correlation matrix and on numerical solutions. For example, Kaiser's eigenvalue-greater-than-one rule (Guttman, 1954; Kaiser, 1960), parallel analysis (Buja & Eyuboglu, 1992; Horn, 1965; Hoyle & Duvall, 2004), or hypothesis significance tests, like Bartlett's test (1950), make use of numerical criteria for comparison or statistical significance criteria. Independently of these numerical solutions, Cattell (1966) proposed the scree test, a graphical strategy to determine the number of components to retain. Along with Kaiser's rule, the scree test is probably the most used strategy and it is included in almost all statistical software dealing with principal components analysis. Unfortunately, it is generally recognized that the graphical nature of Cattell's scree test does not enable clear decision-making about the number of components to retain. The previously proposed non-graphical solutions for
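As a rough illustration of the numerical criteria contrasted in the abstract, the Python sketch below computes the eigenvalues of a correlation matrix and applies Kaiser's eigenvalue-greater-than-one rule plus a simple second-difference ("acceleration") elbow detector. This is not the authors' procedure; the data-generating setup and the elbow heuristic are illustrative assumptions meant only to show the kind of non-graphical decision rule the article discusses.

```python
import numpy as np

def eigenvalues_of_correlation(data):
    """Eigenvalues of the correlation matrix, sorted in decreasing order."""
    corr = np.corrcoef(data, rowvar=False)
    return np.sort(np.linalg.eigvalsh(corr))[::-1]

def kaiser_rule(eigs):
    """Retain components with eigenvalue greater than one (Kaiser, 1960)."""
    return int(np.sum(eigs > 1.0))

def elbow_rule(eigs):
    """Retain the components before the elbow, located here as the point of
    maximum second difference (acceleration) of the eigenvalue plot."""
    accel = np.diff(eigs, n=2)        # second differences, length p - 2
    return int(np.argmax(accel)) + 1  # elbow sits at component argmax + 2 (1-based)

# Illustrative data: three correlated blocks of four variables each
rng = np.random.default_rng(0)
n = 300
blocks = [rng.normal(size=(n, 1)) + 0.5 * rng.normal(size=(n, 4)) for _ in range(3)]
data = np.hstack(blocks)

eigs = eigenvalues_of_correlation(data)
print("Eigenvalues:        ", np.round(eigs, 2))
print("Kaiser rule retains:", kaiser_rule(eigs))
print("Elbow rule retains: ", elbow_rule(eigs))
```

With three genuine blocks in the simulated data, both criteria should point to retaining three components; the interesting cases, as the article argues, are the ones where graphical inspection and simple rules disagree.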
Citations: 280
The Survey Field Needs a Framework for the Systematic Reporting of Questionnaire Development and Pretesting
IF 3.1 | CAS Tier 3 (Psychology) | Q2 PSYCHOLOGY, MATHEMATICAL | Pub Date: 2013-01-01 | DOI: 10.1027/1614-2241/A000070 | Vol. 9, pp. 85-86
Gordon B. Willis, H. Boeije
{"title":"The Survey Field Needs a Framework for the Systematic Reporting of Questionnaire Development and Pretesting","authors":"Gordon B. Willis, H. Boeije","doi":"10.1027/1614-2241/A000070","DOIUrl":"https://doi.org/10.1027/1614-2241/A000070","url":null,"abstract":"","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"9 1","pages":"85-86"},"PeriodicalIF":3.1,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
An Improved Model for Evaluating Change in Randomized Pretest, Posttest, Follow-Up Designs
IF 3.1 | CAS Tier 3 (Psychology) | Q2 PSYCHOLOGY, MATHEMATICAL | Pub Date: 2012-01-01 | DOI: 10.1027/1614-2241/A000041 | Vol. 8, pp. 97-103
C. Mara, R. Cribbie, D. Flora, Cathy Labrish, Laura Mills, L. Fiksenbaum
Randomized pretest, posttest, follow-up (RPPF) designs are often used for evaluating the effectiveness of an intervention. These designs typically address two primary research questions: (1) Do the treatment and control groups differ in the amount of change from pretest to posttest? and (2) Do the treatment and control groups differ in the amount of change from posttest to follow-up? This study presents a model for answering these questions and compares it to recently proposed models for analyzing RPPF designs due to Mun, von Eye, and White (2009) using Monte Carlo simulation. The proposed model provides increased power over previous models for evaluating group differences in RPPF designs.
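The two research questions can be made concrete with a plain change-score comparison. The Python sketch below is not the latent-variable model the article proposes; it is only a bare-bones illustration of what "group differences in change from pretest to posttest and from posttest to follow-up" means, with all group means and effect sizes assumed for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 120   # participants per arm (assumed)

def simulate_arm(means, sd=1.0):
    """Three-wave scores (pretest, posttest, follow-up) for one arm."""
    return rng.normal(means, sd, size=(n, 3))

# Assumed pattern: treatment gains from pre to post and then holds the gain
treatment = simulate_arm(means=[0.0, 0.8, 0.8])
control   = simulate_arm(means=[0.0, 0.0, 0.0])

# Question 1: group difference in pretest-to-posttest change
q1 = stats.ttest_ind(treatment[:, 1] - treatment[:, 0],
                     control[:, 1] - control[:, 0])
# Question 2: group difference in posttest-to-follow-up change
q2 = stats.ttest_ind(treatment[:, 2] - treatment[:, 1],
                     control[:, 2] - control[:, 1])

print(f"Pre -> post change difference:       t = {q1.statistic:.2f}, p = {q1.pvalue:.4f}")
print(f"Post -> follow-up change difference: t = {q2.statistic:.2f}, p = {q2.pvalue:.4f}")
```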
Citations: 11
Estimation of and Confidence Interval Formation for Reliability Coefficients of Homogeneous Measurement Instruments
IF 3.1 | CAS Tier 3 (Psychology) | Q2 PSYCHOLOGY, MATHEMATICAL | Pub Date: 2012-01-01 | DOI: 10.1027/1614-2241/A000036 | Vol. 8, pp. 39-50
Ken Kelley, Ying Cheng
The reliability of a composite score is a fundamental and important topic in the social and behavioral sciences. The most commonly used reliability estimate of a composite score is coefficient α. However, under regularity conditions, the population value of coefficient α is only a lower bound on the population reliability, unless the items are essentially τ-equivalent, an assumption that is likely violated in most applications. A generalization of coefficient α, termed ω, is discussed and generally recommended. Furthermore, a point estimate itself almost certainly differs from the population value. Therefore, it is important to provide confidence interval limits so as not to overinterpret the point estimate. Analytic and bootstrap methods are described in detail for confidence interval construction for ω. We go on to recommend the bias-corrected bootstrap approach for ω and provide open source and freely available R functions via the MBESS package to implement the methods discussed.
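The article's methods are implemented in the R package MBESS. As a language-neutral illustration only, here is a minimal Python sketch that computes coefficient α for a composite and a simple percentile bootstrap confidence interval; coefficient ω and the bias-corrected bootstrap recommended in the article are not implemented here, and the data-generating values are assumptions.

```python
import numpy as np

def coefficient_alpha(items):
    """Coefficient alpha for an n-by-k matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    composite_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / composite_variance)

def percentile_bootstrap_ci(items, stat=coefficient_alpha, n_boot=2000,
                            level=0.95, seed=0):
    """Percentile bootstrap CI for a reliability coefficient (resample rows)."""
    rng = np.random.default_rng(seed)
    n = items.shape[0]
    boots = np.array([stat(items[rng.integers(0, n, size=n)])
                      for _ in range(n_boot)])
    tail = (1.0 - level) / 2.0
    return tuple(np.quantile(boots, [tail, 1.0 - tail]))

# Illustrative data: five items measuring one construct (values are assumptions)
rng = np.random.default_rng(7)
true_score = rng.normal(size=(400, 1))
items = 0.7 * true_score + 0.5 * rng.normal(size=(400, 5))

estimate = coefficient_alpha(items)
low, high = percentile_bootstrap_ci(items)
print(f"alpha = {estimate:.3f}, 95% percentile bootstrap CI = [{low:.3f}, {high:.3f}]")
```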
Citations: 35
Assessing Content Validity Through Correlation and Relevance Tools: A Bayesian Randomized Equivalence Experiment
IF 3.1 | CAS Tier 3 (Psychology) | Q2 PSYCHOLOGY, MATHEMATICAL | Pub Date: 2012-01-01 | DOI: 10.1027/1614-2241/A000040 | Vol. 8, pp. 81-96
B. Gajewski, Valorie Coffland, D. Boyle, M. Bott, L. Price, Jamie Leopold, N. Dunton
Content validity elicits expert opinion regarding the items of a psychometric instrument. Expert opinion can be elicited in many forms: for example, how essential an item is, or its relevance to a domain. This study developed an alternative tool that elicits expert opinion regarding correlations between each item and its respective domain. With 109 Registered Nurse (RN) site coordinators from the National Database of Nursing Quality Indicators, we implemented a randomized Bayesian equivalence trial in which coordinators completed "relevance" or "correlation" content tools regarding the RN Job Enjoyment Scale. We confirmed our hypothesis that the two tools would result in equivalent content information. A Bayesian ordered analysis model supported the results, suggesting that evidence for traditional content validity indices can be justified using correlation arguments.
Citations: 18
Exploiting Prior Information in Stochastic Knowledge Assessment
IF 3.1 | CAS Tier 3 (Psychology) | Q2 PSYCHOLOGY, MATHEMATICAL | Pub Date: 2012-01-01 | DOI: 10.1027/1614-2241/A000035 | Vol. 8, pp. 12-22
J. Heller, Claudia Repitsch
Various adaptive procedures for efficiently assessing the knowledge state of an individual have been developed within the theory of knowledge structures. These procedures set out to draw a detailed picture of an individual’s knowledge in a certain field by posing a minimal number of questions. While research so far mostly emphasized theoretical issues, the present paper focuses on an empirical evaluation of probabilistic assessment. It reports on simulation data showing that both efficiency and accuracy of the assessment exhibit considerable sensitivity to the choice of parameters and prior information as captured by the initial likelihood of the knowledge states. In order to deal with problems that arise from incorrect prior information, an extension of the probabilistic assessment is proposed. Systematic simulations provide evidence for the efficiency and robustness of the proposed extension, as well as its feasibility in terms of computational costs.
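For readers unfamiliar with stochastic knowledge assessment, the Python sketch below shows the basic multiplicative update of a likelihood over knowledge states given an observed response, with lucky-guess and careless-error parameters. The toy knowledge structure and parameter values are assumptions for illustration; the article's proposed extension for coping with incorrect prior information is not implemented here.

```python
# A toy knowledge structure on items a, b, c; each state is the set of items
# an examinee has mastered. Structure and parameters are illustrative assumptions.
states = [frozenset(), frozenset("a"), frozenset("ab"), frozenset("abc")]

beta = 0.1   # careless-error rate: failing an item that is in the state
eta  = 0.1   # lucky-guess rate: solving an item that is not in the state

def response_probability(state, item, correct):
    """P(observed response | knowledge state) under the local error model."""
    p_correct = 1.0 - beta if item in state else eta
    return p_correct if correct else 1.0 - p_correct

def update(likelihood, item, correct):
    """Multiplicative Bayesian update of the likelihood over knowledge states."""
    posterior = {s: prob * response_probability(s, item, correct)
                 for s, prob in likelihood.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# Uniform prior over the states (the article studies what happens when the
# prior information fed into the assessment is incorrect)
likelihood = {s: 1.0 / len(states) for s in states}

# Observe two responses: item "a" solved, item "c" failed
for item, correct in (("a", True), ("c", False)):
    likelihood = update(likelihood, item, correct)

for state in states:
    print(sorted(state) or "{}", round(likelihood[state], 3))
```

An adaptive procedure would, at each step, pick the next item so that the expected update is most informative; the simulations reported in the abstract examine how sensitive that process is to the choice of the initial likelihood.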
Citations: 12
The Impact of Controlling for Extreme Responding on Measurement Equivalence in Cross-Cultural Research
IF 3.1 | CAS Tier 3 (Psychology) | Q2 PSYCHOLOGY, MATHEMATICAL | Pub Date: 2012-01-01 | DOI: 10.1027/1614-2241/A000048 | Vol. 8, pp. 159-170
M. Morren, J. Gelissen, J. Vermunt
Prior research has shown that extreme response style can seriously bias responses to survey questions and that this response style may differ across culturally diverse groups. Consequently, cross-cultural differences in extreme responding may yield incomparable responses when not controlled for. To examine how extreme responding affects the cross-cultural comparability of survey responses, we propose and apply a multiple-group latent class approach where groups are compared on the basis of the factor loadings, intercepts, and factor means in a Latent Class Factor Model. In this approach a latent factor measuring the response style is explicitly included as an explanation for group differences found in the data. Findings from two empirical applications that examine the cross-cultural comparability of measurements show that group differences in responding introduce inequivalence in measurements among groups. Controlling for the response style yields more equivalent measurements. This finding emphasizes the importa...
Citations: 34