{"title":"聚焦于教学法:书面论证的QR评分标准","authors":"Ruby Daniels, Kathryn Appenzeller Knowles, Emily Naasz, Amanda Lindner","doi":"10.5038/1936-4660.16.1.1431","DOIUrl":null,"url":null,"abstract":"Institutional assessments of quantitative literacy/reasoning (QL/QR) have been extensively tested and reported in the literature. While appropriate for measuring student learning at the programmatic or institutional level, such instruments were not designed for classroom grading. After modifying a widely accepted institutional rubric designed to assess QR in written arguments, the current mixed method study tested the reliability of two QR analytic grading rubrics for written arguments and explored students’ reactions to the grading tools. Undergraduate students enrolled in a business course (N = 59) participated. A total of 415 QR artifacts from 40 students were assessed; an additional 19 students provided feedback about the grading tools. A new QR writing rubric included three main criteria (numerical evidence, conclusions, and writing), while a second rubric added a fourth criterion for assignments with data visualization. After two coders rated students’ QR assignments, data analysis found both new QR rubrics had good reliability. Cohen’s kappa found the study’s raters had substantial agreement on all rubric criteria (κ = 0.69 to 0.80). Both the QR writing (α = 0.861) and data visualization (α = 0.859) grading rubrics also had good internal consistency. When asked to provide feedback about the new grading tools, 89% of students shared positive comments, reporting the rubrics clarified assignment expectations, improved their performance, and facilitated the writing process. This paper proposes slight modifications to the phrasing of the new rubrics’ writing criterion, discusses best practices for use of rubrics in QR classrooms, and recommends future research.","PeriodicalId":36166,"journal":{"name":"Numeracy","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Focused on Pedagogy: QR Grading Rubrics for Written Arguments\",\"authors\":\"Ruby Daniels, Kathryn Appenzeller Knowles, Emily Naasz, Amanda Lindner\",\"doi\":\"10.5038/1936-4660.16.1.1431\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Institutional assessments of quantitative literacy/reasoning (QL/QR) have been extensively tested and reported in the literature. While appropriate for measuring student learning at the programmatic or institutional level, such instruments were not designed for classroom grading. After modifying a widely accepted institutional rubric designed to assess QR in written arguments, the current mixed method study tested the reliability of two QR analytic grading rubrics for written arguments and explored students’ reactions to the grading tools. Undergraduate students enrolled in a business course (N = 59) participated. A total of 415 QR artifacts from 40 students were assessed; an additional 19 students provided feedback about the grading tools. A new QR writing rubric included three main criteria (numerical evidence, conclusions, and writing), while a second rubric added a fourth criterion for assignments with data visualization. After two coders rated students’ QR assignments, data analysis found both new QR rubrics had good reliability. Cohen’s kappa found the study’s raters had substantial agreement on all rubric criteria (κ = 0.69 to 0.80). 
Both the QR writing (α = 0.861) and data visualization (α = 0.859) grading rubrics also had good internal consistency. When asked to provide feedback about the new grading tools, 89% of students shared positive comments, reporting the rubrics clarified assignment expectations, improved their performance, and facilitated the writing process. This paper proposes slight modifications to the phrasing of the new rubrics’ writing criterion, discusses best practices for use of rubrics in QR classrooms, and recommends future research.\",\"PeriodicalId\":36166,\"journal\":{\"name\":\"Numeracy\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Numeracy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5038/1936-4660.16.1.1431\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Mathematics\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Numeracy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5038/1936-4660.16.1.1431","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Mathematics","Score":null,"Total":0}
Focused on Pedagogy: QR Grading Rubrics for Written Arguments
Ruby Daniels, Kathryn Appenzeller Knowles, Emily Naasz, Amanda Lindner
Numeracy (2023). DOI: 10.5038/1936-4660.16.1.1431
Institutional assessments of quantitative literacy/reasoning (QL/QR) have been extensively tested and reported in the literature. While appropriate for measuring student learning at the programmatic or institutional level, such instruments were not designed for classroom grading. After modifying a widely accepted institutional rubric designed to assess QR in written arguments, the current mixed-methods study tested the reliability of two QR analytic grading rubrics for written arguments and explored students’ reactions to the grading tools. Undergraduate students enrolled in a business course (N = 59) participated. A total of 415 QR artifacts from 40 students were assessed; an additional 19 students provided feedback about the grading tools. A new QR writing rubric included three main criteria (numerical evidence, conclusions, and writing), while a second rubric added a fourth criterion for assignments with data visualization. After two coders rated students’ QR assignments, data analysis found that both new QR rubrics had good reliability. Cohen’s kappa indicated substantial agreement between the raters on all rubric criteria (κ = 0.69 to 0.80). Both the QR writing (α = 0.861) and data visualization (α = 0.859) grading rubrics also had good internal consistency. When asked to provide feedback about the new grading tools, 89% of students shared positive comments, reporting that the rubrics clarified assignment expectations, improved their performance, and facilitated the writing process. This paper proposes slight modifications to the phrasing of the new rubrics’ writing criterion, discusses best practices for using rubrics in QR classrooms, and recommends future research.
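
For readers who want to see how the reliability statistics reported in the abstract are typically computed, the following is a minimal Python sketch using scikit-learn and NumPy. The rating data are made up for illustration; this is not the authors’ analysis code, only an example of computing Cohen’s kappa (inter-rater agreement) and Cronbach’s alpha (internal consistency).

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical example: two coders score the same ten artifacts on a 0-4 rubric scale.
rater_a = [4, 3, 3, 2, 4, 1, 0, 3, 2, 4]
rater_b = [4, 3, 2, 2, 4, 1, 1, 3, 2, 4]
kappa = cohen_kappa_score(rater_a, rater_b)  # agreement corrected for chance

def cronbach_alpha(scores):
    """Cronbach's alpha for an (artifacts x criteria) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of rubric criteria
    item_vars = scores.var(axis=0, ddof=1)      # variance of each criterion
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of artifact totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical rubric scores: rows = artifacts, columns = the three criteria
# (numerical evidence, conclusions, writing).
rubric_scores = [
    [4, 4, 3],
    [3, 3, 3],
    [2, 2, 1],
    [4, 3, 4],
    [1, 2, 1],
]
alpha = cronbach_alpha(rubric_scores)

print(f"Cohen's kappa: {kappa:.2f}, Cronbach's alpha: {alpha:.2f}")

Benchmarks such as kappa above 0.61 for “substantial” agreement and alpha above 0.8 for “good” internal consistency are common rules of thumb and match how the abstract characterizes its results.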