Outcomes Assessment Methodology for a Course in Probability and Random Variables
Pub Date: 2019-12-03 | DOI: 10.5325/jasseinsteffe.8.1-2.0071 | Pages: 71-85
P. Shankar
Abstract: A methodology for assessing student learning outcomes in the course "Probability for Engineers" has been implemented. The indirect assessment relied on student surveys at the beginning and the conclusion of the course. The direct assessment tracked student scores on midterm examination I, midterm examination II, and the final examination, question by question, on topics tied to the learning outcomes. Performance gains in three topic groups that together cover all learning outcomes were monitored as the course progressed. The robustness of the quantitative measures from the indirect and direct assessments was verified through hypothesis tests. The results demonstrate that the goals of the course were met: students showed a higher level of understanding at the end of the term. While there is still room for improvement, the methodology developed here can easily be extended to courses in other disciplines.
Keywords: direct and indirect assessments, engineering education, engineering probability, outcomes assessment, probability and random variables
{"title":"Outcomes Assessment Methodology for a Course in Probability and Random Variables","authors":"P. Shankar","doi":"10.5325/jasseinsteffe.8.1-2.0071","DOIUrl":"https://doi.org/10.5325/jasseinsteffe.8.1-2.0071","url":null,"abstract":"abstract:A methodology to assess the student learning outcomes in the course \"Probability for Engineers\" has been implemented. The indirect assessment strategy relied on student surveys at the beginning and the conclusion of the course. The direct assessment was based on tracking the student scores in midterm examination I, midterm examination II, and the final examination, question by question on topics related to the learning outcomes. The enhancements in performance in three topics grouped to include all learning outcomes were monitored as the course progressed. Robustness of the quantitative measures of indirect and direct assessments was ensured through hypothesis tests. The results demonstrated that the goals of the course were met in terms of the higher level of understanding by the students at the end of the term. While there is still room for improvement, the methodology developed here can be easily extended to courses in other disciplines. Keywords: direct and indirect assessments, engineering education, engineering probability, outcomes assessment, probability and random variables","PeriodicalId":56185,"journal":{"name":"Journal of Assessment and Institutional Effectiveness","volume":"98 1","pages":"71 - 85"},"PeriodicalIF":0.0,"publicationDate":"2019-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72707336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Using Sustained Assessment Practices for Improving Student Learning Outcomes at Course and Program Levels
Pub Date: 2019-12-03 | DOI: 10.5325/jasseinsteffe.8.1-2.0051 | Pages: 51-70
M. Mahalingam, P. Blumberg
Abstract: Assessments conducted primarily to satisfy external requirements may overlook the essential steps that ensure improvement in outcomes. The most vital steps are identifying the strengths and weaknesses of the course or program and then forming an action plan to remedy the weaknesses. This article showcases two examples of sustained, improvement-focused assessment that produced gains in student learning outcomes at the course and program levels. As a result of this approach, the D, F, or Withdraw (DFW) rate decreased by almost 20% in an introductory gateway course, and, at the program level, students' national percentile scores on an end-of-program assessment, the Major Field Test, improved.
Keywords: continuous improvement, course assessment, program assessment, sustained assessment
{"title":"Using Sustained Assessment Practices for Improving Student Learning Outcomes At Course and Program Levels","authors":"M. Mahalingam, P. Blumberg","doi":"10.5325/jasseinsteffe.8.1-2.0051","DOIUrl":"https://doi.org/10.5325/jasseinsteffe.8.1-2.0051","url":null,"abstract":"abstract:Assessments that are conducted to primarily satisfy external requirements may overlook essential steps that ensure improvement in outcomes. The most vital steps for improvement are identifying strengths and weaknesses of the course or program, followed by an action plan to remedy the weaknesses. This article showcases two examples of sustained assessments with a focus on improvement that resulted in gains in student learning outcomes at course and program levels. As a result of this approach, the D, F, or Withdraw rate decreased by almost 20% in an introductory gateway course, and at the program level students' national percentile scores on an end-of-program assessment, the major field test, improved. Keywords: continuous improvement, course assessment, program assessment, sustained assessment","PeriodicalId":56185,"journal":{"name":"Journal of Assessment and Institutional Effectiveness","volume":"98 1","pages":"51 - 70"},"PeriodicalIF":0.0,"publicationDate":"2019-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78149928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

According to Faculty, the Most Important Reasons for Doing Assessment at an HBCU
Pub Date: 2019-12-03 | DOI: 10.5325/jasseinsteffe.8.1-2.0001 | Pages: 1-21
S. Culver, G. Phipps
Abstract: Since the emergence of assessment as a critical force in higher education, "faculty involvement" has repeatedly been identified as essential, yet approaches to assessment have been less than welcoming to faculty and tend to be more accountability-driven than learning-focused and improvement-driven. Past research on faculty perceptions of outcomes assessment has tended to focus on the negative: why faculty are not interested or do not participate. For our study, we were interested in HBCU faculty perceptions of assessment from a positive perspective. Faculty were asked to rate the relative importance of 15 items documented in the literature as benefits of doing assessment. Results indicate that faculty see assessment as addressing accountability concerns but also as helping with curricular and program revisions. Interestingly, they see assessment as less likely to facilitate faculty discussions about curriculum. Implications for conducting assessment and for further research are discussed.
Keywords: faculty, perceptions, curriculum
{"title":"According to Faculty, the Most Important Reasons for Doing Assessment at an HBCU","authors":"S. Culver, G. Phipps","doi":"10.5325/jasseinsteffe.8.1-2.0001","DOIUrl":"https://doi.org/10.5325/jasseinsteffe.8.1-2.0001","url":null,"abstract":"abstract:Since the emergence of assessment as a critical force in higher education, \"faculty involvement\" has repeatedly been identified as essential, yet approaches to assessment have been less than welcoming to faculty and tend to be more accountability- driven than learning-focused and improvement-driven. Past research on the perceptions of faculty regarding outcome assessment has tended to focus on the negative—why faculty are not interested or do not participate. For our study, we were interested in HBCU faculty perceptions regarding assessment, but from a positive perspective. Faculty were asked to distinguish importance among 15 items documented in the literature as benefits to doing assessment. Results indicate that faculty see assessment as taking care of accountability issues but also helping with curricular and program revisions. Interestingly, they see assessment as less likely to facilitate faculty discussions about curriculum. Implications for conducting assessment and further research are discussed. Keywords: faculty, perceptions, curriculum","PeriodicalId":56185,"journal":{"name":"Journal of Assessment and Institutional Effectiveness","volume":"5 1","pages":"1 - 21"},"PeriodicalIF":0.0,"publicationDate":"2019-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84288723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Using Institutional Data to Predict Teacher Candidate Performance on a Certification Exam
Pub Date: 2018-12-07 | DOI: 10.5325/JASSEINSTEFFE.7.1-2.0001 | Pages: 1-19
L. Clark, Julia Kara-Soteriou, Michael P. Alfano
Abstract: This study employed a continuous quality framework with an emphasis on closing the assessment loop and guiding an intervention process for teacher certification candidates at a master's-level regional institution. Using student academic performance data, a logistic regression model was applied to predict success on a certification exam, the Foundations of Reading Test. Data analysis revealed that the odds of passing this exam were influenced by students' SAT total scores, grade point average, and grade in a required reading instruction course. The results of this model allow the institution to identify at-risk students and proactively provide them with academic intervention.
Keywords: teacher certification exam, foundations of reading, predictive model, academic interventions, institutional data
{"title":"Using Institutional Data to Predict Teacher Candidate Performance on a Certification Exam","authors":"L. Clark, Julia Kara-Soteriou, Michael P. Alfano","doi":"10.5325/JASSEINSTEFFE.7.1-2.0001","DOIUrl":"https://doi.org/10.5325/JASSEINSTEFFE.7.1-2.0001","url":null,"abstract":"Abstract:This study employed a continuous quality framework with an emphasis on closing the assessment loop and guiding an intervention process for teacher certification candidates at a master's-level regional institution. Using student academic performance data, a logistic regression model was applied to predict success on a certification exam, the Foundations of Reading Test. Data analysis revealed that the odds of passing this exam were influenced by students' SAT total scores, grade point average, and grade in a required reading instruction course. The results of this model allow the institution to identify and proactively provide academic intervention to at-risk students. Keywords: Teacher certification exam, foundations of reading, predictive model, academic interventions, institutional data","PeriodicalId":56185,"journal":{"name":"Journal of Assessment and Institutional Effectiveness","volume":"156 1","pages":"1 - 19"},"PeriodicalIF":0.0,"publicationDate":"2018-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73735977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Do Grading Assessment Learning Perceptions Correlate to Post-Graduation Outcomes?
Pub Date: 2018-12-07 | DOI: 10.5325/JASSEINSTEFFE.7.1-2.0069 | Pages: 69-91
Gary Blau, M. A. Gaffney, YJ Kim, S. Jarrell
Abstract: This study's purposes were to further develop reliable grading assessment learning perception (GALP) scales and to investigate their relationships to two distinct self-reported post-graduation outcomes: securing a full-time job versus securing a full-time job consistent with one's major. GALP measures student perceptions that the grading methods used in a course best reflect their course knowledge and skills. Two semesters of senior business undergraduates (Fall 2016, n = 417; Spring 2017, n = 857) were sampled. Four GALP scales were identified: Individual Engagement, Team-based, Exam-based, and Individual Creative. Logistic regression analyses showed that, beyond internship experience, Individual Creative GALP was significantly related to post-graduation employment. We argue that adding one closed-response item to a university-level teaching evaluation form, asking students whether the grading methods used in a course best reflected their course knowledge and skills, would be useful, particularly if followed by an open-response item where students could add their thoughts.
Keywords: grading assessment learning perceptions, post-graduation outcomes, internships, teaching evaluation
{"title":"Do Grading Assessment Learning Perceptions Correlate to Post-Graduation Outcomes?","authors":"Gary Blau, M. A. Gaffney, YJ Kim, S. Jarrell","doi":"10.5325/JASSEINSTEFFE.7.1-2.0069","DOIUrl":"https://doi.org/10.5325/JASSEINSTEFFE.7.1-2.0069","url":null,"abstract":"Abstract:This study's purposes were to further develop reliable grading assessment learning perception (GALP) scales, and investigate their relationships to two distinct self-reported post-graduation outcomes: securing a full-time job versus securing a full-time job consistent with one's major. GALP measures student perceptions that the grading methods used best reflect their course knowledge and skills. Two semesters of senior business undergraduates (Fall 2016, n = 417 and Spring 2017, n = 857) were sampled. Four GALP scales—Individual Engagement, Team-based, Exam-based, and Individual Creative—were identified. Logistic regression analyses showed that beyond internship experience, Individual Creative GALP was significantly related to post-graduation employment. We argue that adding one closed-response item on a university-level teaching evaluation form asking students if the grading methods used in a course best reflected the student's course knowledge and skills would be useful, particularly if followed by an open item, where students could add their thoughts. Keywords: grading assessment learning perceptions, post-graduation outcomes, internships, teaching evaluation","PeriodicalId":56185,"journal":{"name":"Journal of Assessment and Institutional Effectiveness","volume":"32 1","pages":"69 - 91"},"PeriodicalIF":0.0,"publicationDate":"2018-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88981267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Why All Faculty Should Have a Seat at the Assessment Table
Pub Date: 2018-12-07 | DOI: 10.5325/JASSEINSTEFFE.7.1-2.0020 | Pages: 20-40
Jennifer Danley-Scott, G. Scott
Abstract: Although faculty are often treated as a homogeneous group in the literature on student learning outcomes assessment, this should not be the case. Drawing on responses to a national survey, we show that full-time faculty, both tenure-line and non-tenure-line, are likely to be invited to help design and give feedback on assessments implemented in disciplines, institutions, and classes. Full-time faculty are also likely to be invited to interpret the assessment results and offer feedback on how to use them to close the loop. Part-time faculty members are not as likely as their full-time colleagues to be invited to participate in the various stages of the assessment loop. The implications of these findings are discussed.
Keywords: assessment, part-time faculty, student learning outcomes, success, retention
{"title":"Why All Faculty Should Have a Seat at the Assessment Table","authors":"Jennifer Danley-Scott, G. Scott","doi":"10.5325/JASSEINSTEFFE.7.1-2.0020","DOIUrl":"https://doi.org/10.5325/JASSEINSTEFFE.7.1-2.0020","url":null,"abstract":"Abstract:While faculty are often treated as a homogenous group in literature discussing student learning outcomes assessment, this should not be the case. Drawing on responses to a national survey, we show that full-time faculty, both tenure-line and non-tenure-line, are likely to be invited to help design and give feedback on assessments implemented in disciplines, institutions, and classes. Full-time faculty are also likely to be invited to interpret the assessment results and offer feedback on how to use the results to close the loop. Part-time faculty members are not as likely as their full-time colleagues to be invited to participate in the various stages of the assessment loop. The implications of these findings are discussed. Keywords: assessment, part-time faculty, student learning outcomes, success, retention","PeriodicalId":56185,"journal":{"name":"Journal of Assessment and Institutional Effectiveness","volume":"35 1","pages":"20 - 40"},"PeriodicalIF":0.0,"publicationDate":"2018-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81441169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Simple, Transparent, and Less Burdensome: Re-Envisioning Core Assessment at a Regional Public University
Pub Date: 2018-12-07 | DOI: 10.5325/JASSEINSTEFFE.7.1-2.0092 | Pages: 92-114
Tori L Colson, B. Berg, T. Hunt, Z. Mitchell
Abstract: This article discusses how a faculty-driven process used the mantra "simple, transparent, and less burdensome" to reform assessment at a regional Midwestern university. We detail how an assessment task force transformed an unwieldy, cumbersome, and disjointed assessment scheme into a systematic, specific, and actionable assessment plan. The new plan also began to cultivate a culture of assessment focused on improving student learning rather than on compliance, that is, on merely meeting accreditation requirements.
Keywords: higher education, assessment, general education curriculum assessment
{"title":"Simple, Transparent, and Less Burdensome: Re-Envisioning Core Assessment at a Regional Public University","authors":"Tori L Colson, B. Berg, T. Hunt, Z. Mitchell","doi":"10.5325/JASSEINSTEFFE.7.1-2.0092","DOIUrl":"https://doi.org/10.5325/JASSEINSTEFFE.7.1-2.0092","url":null,"abstract":"Abstract:This article discusses how a faculty-driven process utilized the mantra of \"simple, transparent, and less burdensome\" to reform assessment at a regional Midwestern university. We detail how an assessment task force transformed an unwieldy, cumbersome, and disjointed assessment scheme into a systematic, specific, and actionable assessment plan that also began to cultivate a culture of assessment that focuses on improving student learning instead of assessment focused on compliance, that is, meeting accreditation requirements. Keywords: higher education, assessment, general education curriculum assessment","PeriodicalId":56185,"journal":{"name":"Journal of Assessment and Institutional Effectiveness","volume":"76 1","pages":"114 - 92"},"PeriodicalIF":0.0,"publicationDate":"2018-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82267347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Transforming Assessment Feedback Design: Students' Responses to Adaptively-Released Assessment Feedback (ARAF) Strategies
Pub Date: 2018-12-07 | DOI: 10.5325/JASSEINSTEFFE.7.1-2.0041 | Pages: 41-68
M. Northcote, Lindsay Morton, Anthony Williams, Peter Kilgour, S. Hattingh
Abstract: The concept of Adaptively-Released Assessment Feedback (ARAF) is relatively new and, to date, has had limited application in the university sector. This article examines the application of ARAF to course assessment in three different contexts, across multiple disciplines, at both undergraduate and postgraduate levels. The article outlines the ARAF strategies and their potential to promote deeper learning by enhancing student engagement with feedback. Qualitative data from students are used to understand student perceptions of the ARAF strategies. Students reported that ARAF increased their engagement with assessment feedback and, in some cases, provoked deeper reflection and encouraged them to plan their approach to future assessment tasks.
Keywords: quantitative feedback, qualitative feedback, adaptively-released assessment feedback (ARAF) strategies, assessment design
{"title":"Transforming Assessment Feedback Design: Students' Responses to Adaptively-Released Assessment Feedback (ARAF) Strategies","authors":"M. Northcote, Lindsay Morton, Anthony Williams, Peter Kilgour, S. Hattingh","doi":"10.5325/JASSEINSTEFFE.7.1-2.0041","DOIUrl":"https://doi.org/10.5325/JASSEINSTEFFE.7.1-2.0041","url":null,"abstract":"Abstract:The concept of Adaptively-Released Assessment Feedback (ARAF) is relatively new and, to date, has had limited application in the university sector. This article looks at the applications of ARAF into the assessment of courses in three different contexts across multiple disciplines at both undergraduate and postgraduate course levels. The article outlines the ARAF strategies and their potential for promoting a deeper learning process by enhancing student engagement with feedback. Qualitative data from students are utilized to understand student perceptions of ARAF strategies. Students reported that ARAF increased engagement with assessment feedback and, in some cases, provoked deeper reflection and encouraged them to plan their approach to future assessment tasks. Keywords: quantitative feedback, qualitative feedback, adaptively-released assessment feedback (ARAF) strategies, assessment design","PeriodicalId":56185,"journal":{"name":"Journal of Assessment and Institutional Effectiveness","volume":"3 1","pages":"41 - 68"},"PeriodicalIF":0.0,"publicationDate":"2018-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78281511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editor’s Note","authors":"George Anthony Peffer","doi":"10.1353/cer.2017.0016","DOIUrl":"https://doi.org/10.1353/cer.2017.0016","url":null,"abstract":"","PeriodicalId":56185,"journal":{"name":"Journal of Assessment and Institutional Effectiveness","volume":"32 1","pages":"iv - vi"},"PeriodicalIF":0.0,"publicationDate":"2017-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83742488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Mental Health Concerns' Impact on Graduation Intent and Improvement for Brief Counseling
Pub Date: 2016-10-01 | DOI: 10.5325/JASSEINSTEFFE.6.2.0142 | Pages: 142-164
G. Blau, John Dimino, P. Demaria, Clyde Beverly, Marcy Chessler
Abstract: Three online undergraduate survey samples were collected: not-in-counseling (NIC); initial counseling session (ICS), that is, triage only; and brief counseling (BC), a median of four counseling sessions over an eight-week average. Results showed that mental health concerns significantly explained intent to graduate after controlling for background variables and institutional commitment in the NIC and ICS samples. In the smaller BC sample, composed of freshman and transfer ("transitional") students, counseling treatment led to a significant decrease in mental health concerns. For transitional students, a university counseling center may be particularly useful in helping them adjust to their new college environment and persist toward graduation.
{"title":"Mental Health Concerns’ Impact on Graduation Intent and Improvement for Brief Counseling","authors":"G. Blau, John Dimino, P. Demaria, Clyde Beverly, Marcy Chessler","doi":"10.5325/JASSEINSTEFFE.6.2.0142","DOIUrl":"https://doi.org/10.5325/JASSEINSTEFFE.6.2.0142","url":null,"abstract":"Three online undergraduate survey samples were collected: not-in-counseling (NIC); initial counseling session (ICS), that is, only triage; and brief counseling (BC), a median of four counseling sessions over an eight-week average. Results showed that mental health concerns significantly explained intent to graduate after controlling for background variables and institutional commitment for the NIC and ICS samples. For the smaller BC sample, composed of freshmen and transfer students or “transitional students,” counseling treatment led to a significant decrease in mental health concerns. For “transitional” students, a university counseling center may be particularly useful in helping these students adjust to their new college environment and persist toward graduation.","PeriodicalId":56185,"journal":{"name":"Journal of Assessment and Institutional Effectiveness","volume":"45 1","pages":"142 - 164"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88923177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}