Evaluating Academic Rigor, Part II: An Investigation of Student Ratings, Course Grades, and Course Level
James E. Johnson, James A. Jones, T. Weidner, Allison K. Manwell
Pub Date: 2020-07-23 | DOI: 10.5325/jasseinsteffe.9.1-2.0049
Journal of Assessment and Institutional Effectiveness, pp. 49-78
Abstract: Part I of this project (Johnson, Weidner, Jones, & Manwell, 2018) confirmed the definition of course rigor and developed the questions used to assess it. This paper, Part II, uses those student-rating rigor questions to investigate course rigor relative to instructor ratings, course ratings, course grades, enrollment, and course level. A total of 203 courses (2,720 students) participated over a three-year period. Results indicated that course rigor is strongly related to instructor and course ratings but only minimally related to course grades. Lower-level courses were also found to have significantly lower rigor than upper-level courses. These results contradict the theory of retributional bias and suggest that faculty are more likely to receive high student ratings when perceived rigor is high. The study also provides a foundation from which course rigor can be further evaluated in different academic contexts.
Reexamining Three Held Assumptions about Creating Classroom Assignments That Can Be Used for Institutional Assessment
Mark C. Nicholas, Barbara C. Storandt, E. Atwood
Pub Date: 2020-07-23 | DOI: 10.5325/jasseinsteffe.9.1-2.0029
Journal of Assessment and Institutional Effectiveness, pp. 29-48
Abstract: This article empirically examines three assumptions that emerged from the literature on using classroom assignments for institutional assessment. The potential misalignment between the source of evidence (classroom assignments) and the assessment method (an institutional rubric) is a serious threat to validity in course-embedded assessment models. Findings revealed that faculty development in assignment design drew on approaches intended for classroom use of assignments, without examining the implications for institutional assessment. The findings can inform the practice of individual faculty, approaches to professional development in assignment design, and the accountability movement focused on course-embedded assignments.
Student Achievement Factors in a College Introductory Computer Course
Willis L. Boughton
Pub Date: 2020-04-01 | DOI: 10.5325/jasseinsteffe.10.1-2.0001
Journal of Assessment and Institutional Effectiveness, pp. 1-32
Abstract: Custom software was used to collect data for 416 students over multiple semesters of a college introductory computer transfer course. The objective was to quantitatively identify achievement factors and the corresponding actions that could be taken to improve student achievement. Student activity data included exam review time, time spent on assignments, attendance, and Student Response System (SRS) use. Data on specific student skills were obtained from answers to assessment questions. Regression analysis shows that attendance, SRS use, time spent on assignments, and time spent on exam reviews do not significantly affect achievement. Scores on assessment questions requiring basic math skill, and on questions requiring skill in observing and explaining classroom demonstrations in writing, do significantly affect achievement. So do scores on questions repeated from one exam to the next, although, contrary to intuition, the overall effect of these repeated questions is to lower student achievement. A regression model using only these three factors plus the percentage of assignments completed predicts student achievement to within half a letter grade. Improving student skill in basic math provides the greatest opportunity to improve achievement. Successful and unsuccessful students are exclusive groups; unsuccessful students are not "partially" successful.
Teaching Students About the World of Work: A Challenge to Postsecondary Educators ed. by Nancy Hoffman and Michael Lawrence Collins (review)
Reviewed by Marcy L. Brown
Pub Date: 2020-04-01 | DOI: 10.5325/jasseinsteffe.10.1-2.0114
Journal of Assessment and Institutional Effectiveness, pp. 114-116
Challenges Assessing the Impact of Project-Based Learning on Critical Thinking Skills
J. Henderson, Scott C. Marley, M. Wilcox, Natalie Nailor, Stephanie Sowl, Kevin Close
Pub Date: 2020-04-01 | DOI: 10.5325/jasseinsteffe.10.1-2.0033
Journal of Assessment and Institutional Effectiveness, pp. 33-60
Abstract: The lack of a precise definition of critical thinking makes it difficult for educators to agree on how critical thinking should be assessed. This study assesses the efficacy of an innovative university initiative designed to promote critical thinking through project-based learning (PBL). Over 400 students participated in a 2 × 2 factorial experiment that included critical thinking items from both a widely used inventory and a new assessment designed to measure critical thinking in a more practical fashion. The authors' novel assessment breaks critical thinking into construction and critique components; results indicate that critique is more challenging for students regardless of experimental condition. However, students specifically prompted for critique attempted critique more often than counterparts who did not receive a critique prompt. The results indicate a paucity of critical thought in general, suggesting multiple challenges for both the teaching and the assessment of critical thinking skills.
Formative Assessment in the Disciplines: Framing a Continuum of Professional Learning by Margaret Heritage and E. Caroline Wylie (review)
Reviewed by K. Daugherty
Pub Date: 2020-04-01 | DOI: 10.5325/jasseinsteffe.10.1-2.0112
Journal of Assessment and Institutional Effectiveness, pp. 112-114
Comparison of NSSE Data Obtained via Computer Versus Mobile Devices
Jihee Hwang, Felix Wao
Pub Date: 2020-04-01 | DOI: 10.5325/jasseinsteffe.10.1-2.0061
Journal of Assessment and Institutional Effectiveness, pp. 61-84
Abstract: Institutional surveys are an important means of assessing student learning experiences and outcomes in higher education. With the widespread ownership of smartphones and tablets, a growing number of students use mobile devices to complete institutional surveys. Using National Survey of Student Engagement (NSSE) data collected at a large four-year research university, this study examines how survey response patterns and data quality differ between computer (i.e., laptop or desktop) and mobile device responses. The findings indicate that mobile respondents tend to take longer to complete the survey and have higher item nonresponse rates. Among the engagement indicator subscales, responses from first-year students who used mobile devices showed significantly lower internal consistency reliability on all Academic Challenge measures than responses from computer users. Additionally, controlling for student demographics and precollege traits, the adjusted means of the Academic Challenge and Supportive Environment subscales were significantly lower for first-year mobile device respondents.
Course Grade Reliability
D. Eubanks, A. Good, Megan Schramm-Possinger
Pub Date: 2020-04-01 | DOI: 10.5325/jasseinsteffe.10.1-2.0085
Journal of Assessment and Institutional Effectiveness, pp. 85-111
Abstract: This study analyzes the reliability of approximately 800,000 college grades from three higher education institutions that vary in type and size. Comparisons of intraclass correlation coefficients (ICCs) reveal patterns among institutions and academic disciplines. The results suggest that there are styles of grading associated with academic disciplines. The ICC of individual grade assignments is comparable to that of rubric-derived learning assessments at one institution, and both are arguably too low to be used for decision making at that level. A reliability lift calculation suggests that grade averages over eight (or so) courses per student are reliable enough to be used as outcome measures. We discuss how grade statistics can complement efforts to assess program fairness, rigor, and comparability, as well as the complexity of a curriculum. R code and statistical notes are included to facilitate use by assessment and institutional research offices.
Evaluating Academic Course Rigor, Part 1: Defining a Nebulous Construct
J. Johnson, T. Weidner, James A. Jones, Allison K. Manwell
Pub Date: 2019-12-03 | DOI: 10.5325/jasseinsteffe.8.1-2.0086
Journal of Assessment and Institutional Effectiveness, pp. 86-121
Abstract: A widely accepted definition of academic course rigor is elusive within higher education. Although many conceptualizations of course rigor have been identified, both empirically and anecdotally, operationally defining and investigating course rigor is necessary given contemporary attacks on the quality of higher education. This article, Part 1 of a two-part study, describes the three-phase process by which academic course and instructor rigor, and the corresponding rigor questions, were defined and validated. Results revealed that five components are critical to a definition of course rigor: critical thinking, challenge, mastering complex material, time and labor intensity, and production of credible work. These components were used to create questions distributed in 264 courses (2,557 students). The final phase used factor analysis to establish a strong one-factor solution, confirming that the operational definition and corresponding rigor questions were acceptable for empirically evaluating course and instructor rigor.
Keywords: rigor, course ratings
Malleable and Immutable Student Characteristics: Incoming Profiles and Experiences on Campus
Michael Ben-Avie, Brian D. Darrow
Pub Date: 2019-12-03 | DOI: 10.5325/jasseinsteffe.8.1-2.0022
Journal of Assessment and Institutional Effectiveness, pp. 22-50
Abstract: Our predictive models of student success provide evidence that students' incoming profiles do not define their destiny. We have found that the learning and developmental experiences they have after enrollment are far more important in predicting persistence, academic achievement, and graduation. In contrast to immutable student demographic characteristics, we have found that malleable characteristics among students (such as academic habits of mind, sense of belonging, and future orientation) predict student success. Paying attention to students' development does not detract from their learning. In fact, promoting the highest levels of development among students seems to be what helps them reach high academic goals.
Keywords: predictive modeling, student success, longitudinal, cohort study, malleable characteristics, learning and development