
Practical Assessment, Research and Evaluation: Latest Articles

A State Level Analysis of the Marzano Teacher Evaluation Model: Predicting Teacher Value-Added Measures with Observation Scores.
Q2 Social Sciences · Pub Date: 2019-07-01 · DOI: 10.7275/CC5B-6J43
Lindsey Devers Basileo, Michael Toth
Citations: 2
Generalizability Theory in R
Q2 Social Sciences · Pub Date: 2019-07-01 · DOI: 10.7275/5065-GC10
Alan Huebner, Marissa Lucht
Citations: 15
Overview and Illustration of Bayesian Confirmatory Factor Analysis with Ordinal Indicators
Q2 Social Sciences · Pub Date: 2019-05-01 · DOI: 10.7275/VK6G-0075
John M Taylor
Citations: 8
Causal Inference Methods for Selection on Observed and Unobserved Factors: Propensity Score Matching, Heckit Models, and Instrumental Variable Estimation.
Q2 Social Sciences · Pub Date: 2019-04-01 · DOI: 10.7275/7tgr-xt91
P. Scott
Two approaches to causal inference in the presence of non-random assignment are presented: the Propensity Score approach, which pseudo-randomizes by balancing groups on observed propensity to be in treatment, and the Endogenous Treatment Effects approach, which utilizes systems of equations to explicitly model selection into treatment. The three methods based on these approaches that are compared in this study are Heckit models, Propensity Score Matching, and Instrumental Variable models. A simulation is presented to demonstrate these models under different specifications of selection observables, selection unobservables, and outcome unobservables in terms of bias in average treatment effect estimates and size of standard errors. Results show that in most cases Heckit models produce the least bias and highest standard errors in average treatment effect estimates. Propensity Score Matching produces the least bias when selection observables are mildly correlated with selection unobservables and outcome unobservables, while outcome and selection unobservables are uncorrelated. Instrumental Variable Estimation produces the least bias in two cases: (1) when selection unobservables are correlated with both selection observables and outcome unobservables, while selection observables are unrelated to outcome unobservables; (2) when there are no relations between selection observables, selection unobservables, and outcome unobservables.
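As a concrete illustration of the first approach, the sketch below (simulated data and a hypothetical setup, not the paper's simulation design) generates units whose treatment probability depends on a single observed covariate, then compares a naive mean difference with a 1-nearest-neighbour matched estimate. Because the true propensity is monotone in `x` here, matching on `x` stands in for matching on an estimated propensity score.

```python
import bisect
import math
import random

random.seed(0)

TRUE_EFFECT = 2.0
N = 5000

# Simulate units where the observed covariate x drives both selection into
# treatment and the outcome, so a naive treated-vs-control mean difference
# is biased upward.
units = []
for _ in range(N):
    x = random.gauss(0.0, 1.0)
    p_treat = 1.0 / (1.0 + math.exp(-x))          # selection on an observable
    t = 1 if random.random() < p_treat else 0
    y = TRUE_EFFECT * t + 1.5 * x + random.gauss(0.0, 1.0)
    units.append((x, t, y))

treated = [(x, y) for x, t, y in units if t == 1]
control = sorted((x, y) for x, t, y in units if t == 0)

# Naive estimate: contaminated by the systematically higher x of treated units.
naive = (sum(y for _, y in treated) / len(treated)
         - sum(y for _, y in control) / len(control))

def nearest_control_y(x):
    """Outcome of the control unit closest in x (1-nearest-neighbour match)."""
    i = bisect.bisect_left(control, (x,))
    return min(control[max(0, i - 1):i + 1], key=lambda c: abs(c[0] - x))[1]

# ATT: average treated-minus-matched-control outcome difference.
att = sum(y - nearest_control_y(x) for x, y in treated) / len(treated)
print(f"naive difference: {naive:.2f}  matched ATT: {att:.2f}  true: {TRUE_EFFECT}")
```

With selection on observables only, matching removes most of the naive estimate's bias; the paper's point is precisely that this stops holding once unobservables enter the selection process.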
Citations: 7
Determining Item Screening Criteria Using Cost-Benefit Analysis.
Q2 Social Sciences · Pub Date: 2019-04-01 · DOI: 10.7275/XSQM-8839
Bozhidar M. Bashkov, Jerome C. Clauser
Citations: 5
A Plot for the Visualization of Missing Value Patterns in Multivariate Data
Q2 Social Sciences · Pub Date: 2019-01-01 · DOI: 10.7275/94RA-1Y55
P. Valero-Mora, María F. Rodrigo, M. Sanchez, J. Sanmartín
Citations: 2
Using Rater Cognition to Improve Generalizability of an Assessment of Scientific Argumentation
Q2 Social Sciences · Pub Date: 2019-01-01 · DOI: 10.7275/EY9D-P954
Katrina Borowiec, Courtney Castle
Rater cognition or “think-aloud” studies have historically been used to enhance rater accuracy and consistency in writing and language assessments. As assessments are developed for new, complex constructs from the Next Generation Science Standards (NGSS), the present study illustrates the utility of extending “think-aloud” studies to science assessment. The study focuses on the development of rubrics for scientific argumentation, one of the NGSS Science and Engineering practices. The initial rubrics were modified based on cognitive interviews with five raters. Next, a group of four new raters scored responses using the original and revised rubrics. A psychometric analysis was conducted to measure change in interrater reliability, accuracy, and generalizability (using a generalizability study or “g-study”) for the original and revised rubrics. Interrater reliability, accuracy, and generalizability increased with the rubric modifications. Furthermore, follow-up interviews with the second group of raters indicated that most raters preferred the revised rubric. These findings illustrate that cognitive interviews with raters can be used to enhance rubric usability and generalizability when assessing scientific argumentation, thereby improving assessment validity.
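The g-study mentioned in the abstract rests on a variance decomposition of a persons-by-raters score matrix. As a hedged illustration (toy scores invented here, not the study's data), a one-facet fully crossed G-study can be computed directly from two-way ANOVA mean squares:

```python
# One-facet crossed G-study (persons x raters) on a toy score matrix.
scores = [
    [7, 8, 7],   # person 1 as scored by raters 1-3
    [5, 5, 4],   # person 2
    [9, 9, 8],   # person 3
    [4, 5, 4],   # person 4
]
n_p, n_r = len(scores), len(scores[0])

grand = sum(map(sum, scores)) / (n_p * n_r)
p_means = [sum(row) / n_r for row in scores]
r_means = [sum(col) / n_p for col in zip(*scores)]

# Two-way ANOVA sums of squares for the fully crossed design.
ss_p = n_r * sum((m - grand) ** 2 for m in p_means)
ss_r = n_p * sum((m - grand) ** 2 for m in r_means)
ss_tot = sum((x - grand) ** 2 for row in scores for x in row)
ss_res = ss_tot - ss_p - ss_r

ms_p = ss_p / (n_p - 1)
ms_r = ss_r / (n_r - 1)
ms_res = ss_res / ((n_p - 1) * (n_r - 1))

# Variance components via the expected-mean-square equations.
var_res = ms_res                  # person x rater interaction + error
var_p = (ms_p - ms_res) / n_r     # true between-person variance
var_r = (ms_r - ms_res) / n_p     # rater severity variance

# Relative generalizability coefficient for a design with n_r raters.
g_coef = var_p / (var_p + var_res / n_r)
print(f"var(person)={var_p:.3f} var(rater)={var_r:.3f} G={g_coef:.3f}")
```

A large person component relative to the residual yields a G coefficient near 1; rubric revisions that reduce rater-dependent noise show up as a shrinking residual component and a rising G.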
Citations: 3
Addressing the shortcomings of traditional multiple-choice tests: subset selection without mark deductions
Q2 Social Sciences · Pub Date: 2018-12-21 · DOI: 10.7275/HQ8A-F262
Lucia Otoyo, M. Bush
This article presents the results of an empirical study of “subset selection” tests, which are a generalisation of traditional multiple-choice tests in which test takers are able to express partial knowledge. Similar previous studies have mostly been supportive of subset selection, but the deduction of marks for incorrect responses has been a cause for concern. For the present study, a novel marking scheme based on Akeroyd’s “dual response system” was used instead. In Akeroyd’s system, which assumes that every question has four answer options, test takers are able to split their single 100% bet on one answer option into two 50% bets by selecting two options, or into four 25% bets by selecting no options. To achieve full subset selection, this idea was extended so that test takers could also split their 100% bet equally between three options. The results indicate increased test reliability (in the sense of measurement consistency), and also increased satisfaction on the part of the test takers. Furthermore, since the novel marking scheme does not in principle lead to either inflated or deflated marks, it is easy for educators who currently use traditional multiple-choice tests to switch to subset selection tests.
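The marking scheme described above never deducts marks: a response simply earns whatever fraction of the bet was placed on the correct option. A minimal sketch of that rule (a hypothetical helper, not the authors' implementation), generalised to arbitrary subset sizes:

```python
def subset_score(selected, correct, n_options=4):
    """Equal-split subset-selection score for one question, no deductions.

    `selected` is the set of option indices the test taker marked; an
    empty selection is read, as in Akeroyd's dual response system, as
    betting equally on all `n_options` options.
    """
    bet = set(selected) if selected else set(range(n_options))
    return 1.0 / len(bet) if correct in bet else 0.0

# Full certainty, a two-way hedge, a blank response, and a wrong subset:
print(subset_score({2}, correct=2))        # 1.0
print(subset_score({1, 2}, correct=2))     # 0.5
print(subset_score(set(), correct=2))      # 0.25
print(subset_score({0, 1, 3}, correct=2))  # 0.0
```

Because every bet is non-negative and sums to 100%, expected marks are never deflated by guessing penalties, which is the property the article highlights.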
Citations: 3
The Ensemble and Model Comparison Approaches for Big Data Analytics in Social Sciences.
Q2 Social Sciences · Pub Date: 2018-11-01 · DOI: 10.7275/CHAW-Y360
Chong Ho Alex Yu, Hyun Seo Lee, Emily Lara, Siyan Gan
Big data analytics are prevalent in fields like business, engineering, public health, and the physical sciences, but social scientists are slower than their peers in other fields in adopting this new methodology. One major reason for this is that traditional statistical procedures are typically not suitable for the analysis of large and complex data sets. Although data mining techniques could alleviate this problem, it is often unclear to social science researchers which option is the most suitable for a particular research problem. The main objective of this paper is to illustrate how the model comparison of two popular ensemble methods, namely boosting and bagging, could yield an improved explanatory model.
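The paper's comparison is carried out with full data-mining toolkits, but the core ensemble idea (combining many weak models into a stronger one) can be shown in miniature. The sketch below is an illustrative toy, not the paper's setup: it boosts axis-aligned decision stumps with AdaBoost on a small diagonal-boundary dataset, where any single stump is limited to about 75% accuracy. Bagging is omitted here because bagged stumps add little expressiveness on this geometry; boosting makes the ensemble gain visible.

```python
import math

# Toy 2-D dataset: label +1 above the anti-diagonal x + y >= 0.9, else -1.
data = [((x / 10, y / 10), 1 if x + y >= 9 else -1)
        for x in range(10) for y in range(10)]

def stump_predict(feat, thr, sign, x):
    return sign if x[feat] >= thr else -sign

def best_stump(data, w):
    """Weighted-error-minimising decision stump over both features."""
    best = None
    for feat in (0, 1):
        for thr in [v / 10 for v in range(11)]:
            for sign in (1, -1):
                err = sum(wi for (xi, yi), wi in zip(data, w)
                          if stump_predict(feat, thr, sign, xi) != yi)
                if best is None or err < best[0]:
                    best = (err, feat, thr, sign)
    return best

def adaboost(data, rounds=20):
    n = len(data)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, feat, thr, sign = best_stump(data, w)
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, feat, thr, sign))
        # Reweight: misclassified points get more weight next round.
        w = [wi * math.exp(-alpha * yi * stump_predict(feat, thr, sign, xi))
             for (xi, yi), wi in zip(data, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def ensemble_predict(model, x):
    vote = sum(a * stump_predict(f, t, s, x) for a, f, t, s in model)
    return 1 if vote >= 0 else -1

def accuracy(pred):
    return sum(pred(xi) == yi for xi, yi in data) / len(data)

_, f, t, s = best_stump(data, [1.0 / len(data)] * len(data))
acc_single = accuracy(lambda x: stump_predict(f, t, s, x))
model = adaboost(data, rounds=20)
acc_boost = accuracy(lambda x: ensemble_predict(model, x))
print(f"single stump: {acc_single:.2f}  boosted (20 stumps): {acc_boost:.2f}")
```

The weighted vote of many stumps approximates the diagonal boundary that no single axis-aligned split can represent, which is the kind of gain the ensemble-comparison exercise in the paper is after.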
Citations: 4
An Effective Rubric Norming Process.
Q2 Social Sciences · Pub Date: 2018-09-01 · DOI: 10.7275/ERF8-CA22
K. Schoepp, M. Danaher, A. A. Kranov
Within higher education, rubric use is expanding. Whereas some years ago the topic of rubrics may have been of interest only to faculty in colleges of education, in recent years the focus on teaching and learning and the emphasis from accrediting bodies have elevated the importance of rubrics across disciplines and different types of assessment. One of the key aspects of successful implementation of a shared rubric is the process known as norming, calibrating, or moderating rubrics, an oft-neglected area in the rubric literature. Norming should be a collaborative process built around knowledge of the rubric and meaningful discussion leading to evidence-driven consensus, but actual examples of norming are rarely available to university faculty. This paper describes the steps involved in a successful consensus-driven norming process in higher education using one particular rubric, the Computing Professional Skills Assessment (CPSA). The steps are: 1) document preparation; 2) rubric review; 3) initial reading and scoring of one learning outcome; 4) initial sharing/recording of results; 5) initial consensus development and adjusting of results; 6) initial reading and scoring of remaining learning outcomes; 7) reading and scoring of remaining transcripts; 8) sharing/recording of results; 9) development of consensus and adjusting of results. This norming process, though used for the CPSA, is transferable to other rubrics where faculty have come together to collaborate on grading a shared assignment. It is most appropriate for higher education where, more often than not, faculty independence requires consensus over directive.
Citations: 9