Using Computer-Based Assessment and Feedback
Pub Date: 2018-07-01 | DOI: 10.4018/IJTEPD.2018070104
Akrum Helfaya, James O'Neill
This article describes how assessment and feedback represent two key factors that affect students' learning. Using e-assessment with prompt e-feedback reduces the gap between present and desired performance and is considered a reflexive and dynamic system for dealing with the new generation of digital natives. Action research was used to investigate students' perceptions of using computer-based assessment (CBA) and/or computer-based feedback (CBF) in the teaching and learning process. Semi-structured interviews and focus groups were conducted with 44 undergraduate (UG) students to assess their perceptions of using CBA and CBF. Findings show that students generally agreed on the usefulness and benefits of CBA and/or CBF in teaching accounting and non-accounting modules. For example, the results reveal that many participants valued working online compared with traditional assessment and appreciated the instant feedback they received. Additionally, technology can provide an avenue for the assessment and the personalised, comprehensive, and prompt feedback that diverse, digital students need in 21st-century higher education.
{"title":"Using Computer-Based Assessment and Feedback","authors":"Akrum Helfaya, James O'Neill","doi":"10.4018/IJTEPD.2018070104","DOIUrl":"https://doi.org/10.4018/IJTEPD.2018070104","url":null,"abstract":"This article describes how assessment and feedback represent two key factors that affect students' learning. Using e-assessment with prompt e-feedback reduces the gap between present and desired performance and is considered to be a reflexive and dynamic system in dealing with the new generation of digital natives. Action research was used to investigate students' perception of using computer-based assessment (CBA) and/or computer-based feedback (CBF) in teaching and learning process. Both semi-structured interviews and focus groups were conducted with 44 UG students to assess their perceptions of using CBA and CBF. Findings show that students are generally agreed on the use of and benefits of CBA and/or CBF in teaching accounting and non-accounting modules. For example, these results reveal that many participants valued working online compared to traditional assessment and appreciated the instant feedback they received. Additionally, technology can provide an avenue for assessment and personalised and comprehensive prompt feedback that diverse and digital students need in the 21st Century Higher Education.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132299933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gamification's Role as a Learning and Assessment Tool in Education
Pub Date: 2016-10-01 | DOI: 10.4018/978-1-7998-0420-8.ch038
Salihuddin Suhadi, Hasnah Mohamed, Zaleha Abdullah, Norasykin Mohd Zaid, Baharuddin Aris, Mageswaran Sanmugam
Gamification is an emerging trend that many predict will further enhance the field of educational technology in the new millennium. Gamification has fared well in the corporate world and is gradually making its way into the educational arena. Game elements such as points, badges, and leaderboards can help keep students not only motivated but also engaged in the teaching and learning process at school. Because learning and assessment go hand in hand in classroom knowledge acquisition, it should be established whether gamification can truly be used as a learning and assessment tool in the teaching and learning process. This paper discusses the repercussions of using gamification as a learning and assessment tool, based on a review of several studies carried out in the field of gamification.
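As a rough illustration of how these game elements can be wired together, the short sketch below awards points, derives badges, and builds a leaderboard. It is a generic example written for this summary; the class name, point thresholds, and badge names are invented and are not drawn from the studies reviewed in the paper.

```python
from collections import defaultdict

# Hypothetical badge thresholds (illustrative values only).
BADGES = [(100, "Bronze"), (250, "Silver"), (500, "Gold")]

class Gamebook:
    """Tracks points and badges for students and builds a leaderboard."""

    def __init__(self):
        self.points = defaultdict(int)

    def award(self, student: str, pts: int) -> None:
        self.points[student] += pts

    def badges(self, student: str) -> list[str]:
        earned = self.points[student]
        return [name for threshold, name in BADGES if earned >= threshold]

    def leaderboard(self, top: int = 10) -> list[tuple[str, int]]:
        return sorted(self.points.items(), key=lambda kv: kv[1], reverse=True)[:top]

# Example usage
book = Gamebook()
book.award("Aisha", 120)
book.award("Ben", 260)
print(book.leaderboard())   # [('Ben', 260), ('Aisha', 120)]
print(book.badges("Ben"))   # ['Bronze', 'Silver']
```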
{"title":"Gamification's Role as a Learning and Assessment Tool in Education","authors":"SuhadiSalihuddin, MohamedHasnah, AbdullahZaleha, ZaidNorasykin Mohd, ArisBaharuddin, SanmugamMageswaran","doi":"10.4018/978-1-7998-0420-8.ch038","DOIUrl":"https://doi.org/10.4018/978-1-7998-0420-8.ch038","url":null,"abstract":"Gamification is a new and upcoming trend that is predicted by many to further enhance the field of educational technology in the new millennium. The use of gamification has fared well in the corporate world and is gradually transcending into the educational arena. The usage of game elements such as points, badges and leader board can assist in keeping the students not only motivated but also engaged to the teaching and learning process in the school. As learning and assessment come hand in hand as a knowledge acquiring process in a classroom, therefore it should be identified whether or not gamification can be truly utilized in the form of a learning and assessment tool in the teaching and learning process. This paper will discuss about the repercussions of using gamification as a learning and assessment tool based on the review of several studies carried out in the field of gamification.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127659309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Impact of Accreditation on Student Learning Outcomes
Pub Date: 2016-10-01 | DOI: 10.4018/978-1-7998-0420-8.ch045
Akila Sarirete, Tayeb Brahimi, I. Mohammed
In recent years, student outcomes in higher education have captured the interest and concern of accreditation communities, governments, employers, and international organizations. Student outcomes are becoming the principal gauge of higher education's effectiveness, and accreditation bodies expect learning outcomes to be well defined, articulated, assessed, verified, and used as a guide for future improvement. The paper investigates the impact of accreditation on student outcomes in higher education and examines two accreditation bodies in particular: the National Commission for Academic Accreditation and Assessment (NCAAA), established by the Higher Council of Education in Saudi Arabia, and the Accreditation Board for Engineering and Technology Inc. (ABET). Results from a mathematics course at Effat University, Jeddah, KSA, show how important learning outcomes are in the process of re-evaluating the strategies and methodologies used in the learning process.
{"title":"The Impact of Accreditation on Student Learning Outcomes","authors":"SarireteAkila, BrahimiTayeb, I. Mohammed","doi":"10.4018/978-1-7998-0420-8.ch045","DOIUrl":"https://doi.org/10.4018/978-1-7998-0420-8.ch045","url":null,"abstract":"In recent years, student outcomes in higher education has captured the interest and concern of accreditation communities, governments, employers as well as international organizations. Student outcomes is becoming the principal gauge of higher education's effectiveness and accreditation bodies expect learning outcomes to be well defined, articulated, assessed, verified, and used as a guide for future improvement. The paper investigates the impact of accreditation on student outcomes in higher education and examines the impact of two accreditation bodies on student outcomes, namely: The National Commission for Academic Accreditation and Assessment (NCAAA) established by the Higher Council of Education in Saudi Arabia and the Accreditation Board for Engineering and Technology Inc. (ABET). Results from a course in Mathematics at Effat University, Jeddah, KSA, showed how important the learning outcome is in the process of re-evaluating strategies and methodologies used in the learning process.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123335154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Potentials and Challenges of a Situated Professional Development Model
Pub Date: 2016-06-16 | DOI: 10.4018/978-1-5225-0204-3.CH008
Dante Cisterna, A. Gotwals, Tara Kintz, John L. Lane, Edward D. Roeber
The purpose of this chapter is to describe a statewide professional development program designed to improve teachers' knowledge and practices around formative assessment. The authors describe three key characteristics that guided the program design: (1) providing a framework for formative assessment; (2) providing opportunities for flexible implementation; and (3) providing support for capacity development. The chapter provides examples of the ways the program was instantiated at the local level, discusses the potentials and challenges related to the professional development implementation, and illustrates connections to teacher learning about formative assessment. The authors provide recommendations that may help those who design and deliver professional development to balance large-scale (e.g., state-level) program expectations with local and situated contexts of implementation. General implications for the design and enactment of situated professional development models are also described.
{"title":"Potentials and Challenges of a Situated Professional Development Model","authors":"Dante Cisterna, A. Gotwals, Tara Kintz, John L. Lane, Edward D. Roeber","doi":"10.4018/978-1-5225-0204-3.CH008","DOIUrl":"https://doi.org/10.4018/978-1-5225-0204-3.CH008","url":null,"abstract":"The purpose of this chapter is to describe a statewide professional development program designed to improve teachers' knowledge and practices around formative assessment. The authors describe three key characteristics that guided the program design: (1) providing a framework for formative assessment; (2) providing opportunities for flexible implementation; and (3) providing support for capacity development. The chapter provides examples of the ways the program was instantiated at the local level, discusses the potentials and challenges related to the professional development implementation, and illustrates connections to teacher learning about formative assessment. The authors provide recommendations that may help individuals who design and deliver professional development that balance large-scale program expectations (e.g., state-level) with local and situated contexts of implementation. General implications for the design and enactment of situated professional development models are also described.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125068613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Impact of the E-Collaborative and Traditional Learning Styles on Learning Outcomes and Anxiety
Pub Date: 2016-04-01 | DOI: 10.4018/IJEC.2016040103
Yu Zhonggen
Nowadays, information technologies are attracting growing attention, and their application to English language learning is flourishing. Using the Foreign Language Classroom Anxiety Scale and the College English Test Band 4, this study explored the different impacts of e-collaborative learning via QQ groups and traditional multimedia learning on learning outcomes and anxiety among tertiary students. Around 70 participants took part in the different styles of learning and instruction and completed both surveys and tests. The results showed that QQ group-based e-collaborative learning significantly decreased anxiety, but no significant gain in learning outcomes was found compared with traditional multimedia learning. The correlation between learning outcomes and anxiety variables was also examined, yielding no significant findings. The advantages and disadvantages of the study are discussed, and directions for future research and advice for practitioners are offered.
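For readers unfamiliar with this kind of group comparison, the sketch below shows how anxiety scores from two groups might be compared with an independent-samples t-test. The data, group sizes, and effect are invented for illustration and do not reproduce the study's actual analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical FLCAS-style anxiety scores (higher = more anxious); values are invented.
qq_group = rng.normal(loc=85, scale=12, size=35)          # e-collaborative (QQ group) learners
multimedia_group = rng.normal(loc=95, scale=12, size=35)  # traditional multimedia learners

# Independent-samples t-test (Welch's variant, not assuming equal variances).
t_stat, p_value = stats.ttest_ind(qq_group, multimedia_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A similar comparison could be run on CET-4 scores to test for learning-outcome differences.
```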
{"title":"The Impact of the E-Collaborative and Traditional Learning Styles on Learning Outcomes and Anxiety","authors":"Yu Zhonggen","doi":"10.4018/IJEC.2016040103","DOIUrl":"https://doi.org/10.4018/IJEC.2016040103","url":null,"abstract":"Nowadays, information technologies are catching growing attention and their application to English language learning is also prospering. Using a Foreign Language Classroom Anxiety Scale and College English Test Band 4, this study explored the different impacts of the e-collaborative learning via QQ group and the traditional multi-media learning on learning outcomes and anxiety among tertiary students. Around 70 participants were involved in different styles of learning and instruction and received both surveys and tests. The results showed that the QQ group-based e-collaborative learning could significantly decrease anxiety but no significant gain was found in learning outcomes compared with the traditional multi-media learning. Correlation between learning outcomes and variables of anxiety was also studied, which resulted in no significant findings. Both disadvantages and advantages of this study were discussed and future research and advice to practitioners were recommended as well.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128928131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessment of Fluid Intelligence Utilizing a Computer Simulated Game
Pub Date: 2015-10-01 | DOI: 10.4018/978-1-7998-0420-8.ch071
C. C. Buford, Brian O'Leary
Measurement of General Mental Ability (GMA) using computer-mediated simulations (CMS) may provide a new method of assessment. CMS used for assessments of GMA may be strongly affected by a participant's prior experience, and the predictive utility of CMS for assessment of GMA is largely unexplored. In this experiment, an existing computer video game was modified to function as an assessment of Fluid Intelligence (Gf) while controlling for participants' prior experience with CMS. Results indicated a positive relationship between tests of Fluid Intelligence (Gf) and game performance (r = .44-.46), a weaker relationship between game performance and Crystallized Intelligence (Gc) (r = .27), no significant relationship to g, and no significant moderating effect of participants' prior experience on any of these relationships. Based on these findings, CMS appear to hold promise as a new assessment tool for factors of GMA.
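The reported relationships are bivariate correlations plus a check for moderation by prior experience. The sketch below illustrates how such estimates could be computed on hypothetical score vectors; the variable names and simulated data are assumptions, not the study's real data or analysis pipeline.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 120  # hypothetical sample size

gf = rng.normal(size=n)                            # fluid-intelligence test score (standardized)
experience = rng.normal(size=n)                    # prior experience with similar games (standardized)
game = 0.45 * gf + rng.normal(scale=0.9, size=n)   # simulated game performance

# Bivariate correlation between Gf and game performance.
r = np.corrcoef(gf, game)[0, 1]
print(f"r(Gf, game) = {r:.2f}")

# Moderation check: does prior experience change the Gf-game relationship?
X = sm.add_constant(np.column_stack([gf, experience, gf * experience]))
model = sm.OLS(game, X).fit()
print(model.params)   # the coefficient on the interaction term tests for moderation
```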
{"title":"Assessment of Fluid Intelligence Utilizing a Computer Simulated Game","authors":"C. C. Buford, Brian O'Leary","doi":"10.4018/978-1-7998-0420-8.ch071","DOIUrl":"https://doi.org/10.4018/978-1-7998-0420-8.ch071","url":null,"abstract":"Measurement of General Mental Ability (GMA) using computer mediated simulations (CMS) may provide a new method of assessment. CMS used for assessments of GMA may be strongly affected by a participant's prior experience, and the predictive utility of CMS for assessment of GMA is largely unexplored. In this experiment, an existing computer video game was modified to function as an assessment of Fluid Intelligence (Gf) while controlling for participants' prior experience with CMS. Results indicated a positive relationship between tests of Fluid Intelligence (Gf) and game performance (r = .44 - 46), a weaker relationship between game performance and Crystallized Intelligence (Gc) (r = .27), no significant relationship to g, and no significant moderating effect for participants' prior experience upon any of these relationships. Based on these findings, CMS appear to hold promise as a new assessment tool for factors of GMA.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124587739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer Simulation in Higher Education
DOI: 10.4018/978-1-7998-0420-8.ch033
Yufeng Qian
Computer simulation, as both an instructional strategy and a technology, holds great potential to transform teaching and learning. However, there has been terminological ambiguity and typological inconsistency in how the term computer simulation is used in education settings. This chapter identifies three core components of computer simulation and develops a learning outcome-based categorization, linking together computer simulation's technical affordances, learning opportunities, and learning outcomes. Exemplary computer simulations in higher education are identified to illustrate the unique affordances, opportunities, and outcomes of each type of computer simulation.
{"title":"Computer Simulation in Higher Education","authors":"Yufeng Qian","doi":"10.4018/978-1-7998-0420-8.ch033","DOIUrl":"https://doi.org/10.4018/978-1-7998-0420-8.ch033","url":null,"abstract":"Computer simulation as both an instructional strategy and technology holds great potential to transform teaching and learning. However, there have been terminological ambiguity and typological inconsistency when the term computer simulation is used in the education setting. This chapter identifies three core components of computer simulation, and develops a learning outcome-based categorization, linking together computer simulation's technical affordances, learning opportunities, and learning outcomes. Exemplary computer simulations in higher education are identified to illustrate the unique affordances, opportunities, and outcomes of each type of computer simulation.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125087301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evidence-Centered Concept Map in Computer-Based Assessment of Critical Thinking
DOI: 10.4018/978-1-4666-9441-5.CH019
Y. Rosen, M. Mosharraf
A concept map is a graphical tool for representing knowledge structure in the form of a graph whose nodes represent concepts and whose arcs correspond to the interrelations between them. Using a concept map engages students in a variety of critical and complex thinking processes, such as evaluating, analyzing, and decision making. Although the potential of concept maps to assess students' knowledge has been recognized, concept maps are traditionally used as instructional tools. The chapter introduces a technology-enabled, three-phase Evidence-Centered Concept Map (ECCM) designed to make students' thinking visible in critical thinking assessment tasks that require students to analyze claims and supporting evidence on a topic and to draw conclusions. Directions for future research are discussed in terms of their implications for technology tools in large-scale assessment programs that target higher-order thinking skills.
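Because a concept map is essentially a labeled directed graph, it is straightforward to represent in code. The sketch below encodes a tiny map with networkx and counts propositions that match an expert map; the concepts, linking phrases, and scoring rule are invented examples and are not taken from the ECCM tasks described in the chapter.

```python
import networkx as nx

# A concept map as a directed graph: nodes are concepts, edges carry linking phrases.
cmap = nx.DiGraph()
cmap.add_edge("greenhouse gases", "global temperature", label="raise")
cmap.add_edge("global temperature", "sea level", label="influences")
cmap.add_edge("claim", "evidence", label="is supported by")

# A deliberately simple scoring idea: count propositions that match an expert map.
expert = {("greenhouse gases", "global temperature", "raise"),
          ("global temperature", "sea level", "influences")}
student = {(u, v, d["label"]) for u, v, d in cmap.edges(data=True)}
print(f"matched propositions: {len(student & expert)} of {len(expert)}")
```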
{"title":"Evidence-Centered Concept Map in Computer-Based Assessment of Critical Thinking","authors":"Y. Rosen, M. Mosharraf","doi":"10.4018/978-1-4666-9441-5.CH019","DOIUrl":"https://doi.org/10.4018/978-1-4666-9441-5.CH019","url":null,"abstract":"A concept map is a graphical tool for representing knowledge structure in the form of a graph whose nodes represent concepts, while arcs between nodes correspond to interrelations between them. Using a concept map engages students in a variety of critical and complex thinking, such as evaluating, analyzing, and decision making. Although the potential use of concept maps to assess students' knowledge has been recognized, concept maps are traditionally used as instructional tools. The chapter introduces a technology-enabled three-phase Evidence-Centered Concept Map (ECCM) designed to make students' thinking visible in critical thinking assessment tasks that require students to analyze claims and supporting evidence on a topic and to draw conclusions. Directions for future research are discussed in terms of their implications to technology tools in large-scale assessment programs that target higher-order thinking skills.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"28 10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116377528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessment Activities in Massive Open On-Line Courses
DOI: 10.4018/978-1-7998-0420-8.ch029
P. Muñoz-Merino, J. Ruipérez-Valiente, Juan Luis Sanz Moreno, C. D. Kloos
This chapter analyzes the implications of the new MOOC paradigm for assessment activities, emphasizing the differences with respect to other, non-MOOC educational technology environments and offering insight into the redesign of assessment activities for MOOCs. The chapter also compares the assessment activities available in some of the most widely used MOOC platforms at present. In addition, the process of designing MOOC assessment activities is analyzed, with specific examples of how to design and create different types of assessment activities. The Genghis authoring tool is presented as a solution for creating some types of exercises on the Khan Academy platform. Finally, the chapter analyzes the learning analytics features related to assessment activities that are present in MOOCs and provides guidelines on how to interpret and take advantage of this information.
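As an illustration of what a machine-gradable exercise with instant feedback can look like, the sketch below defines a simple multiple-choice item and a grading function. The field names and format are assumptions made for this example; they do not reproduce the Genghis tool's or the Khan Academy platform's actual exercise schema.

```python
# A hypothetical, platform-agnostic exercise definition; field names are assumptions.
exercise = {
    "id": "ohms-law-01",
    "prompt": "A 12 V source drives a 4 ohm resistor. What current flows?",
    "choices": ["0.33 A", "3 A", "8 A", "48 A"],
    "answer_index": 1,
    "feedback": {
        "correct": "Right: I = V / R = 12 / 4 = 3 A.",
        "incorrect": "Recall Ohm's law, I = V / R, and check your division.",
    },
}

def grade(exercise: dict, response_index: int) -> dict:
    """Return a score and instant feedback for a single response."""
    correct = response_index == exercise["answer_index"]
    return {
        "score": 1.0 if correct else 0.0,
        "feedback": exercise["feedback"]["correct" if correct else "incorrect"],
    }

print(grade(exercise, 1))   # {'score': 1.0, 'feedback': 'Right: I = V / R = 12 / 4 = 3 A.'}
print(grade(exercise, 3))
```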
{"title":"Assessment Activities in Massive Open On-Line Courses","authors":"P. Muñoz-Merino, J. Ruipérez-Valiente, Juan Luis Sanz Moreno, C. D. Kloos","doi":"10.4018/978-1-7998-0420-8.ch029","DOIUrl":"https://doi.org/10.4018/978-1-7998-0420-8.ch029","url":null,"abstract":"This chapter analyzes the different implications of the new MOOC paradigm in assessment activities, emphasizing the differences with respect to other non MOOC educational technology environments and giving an insight about the redesign of assessment activities for MOOCs. The chapter also compares the different assessment activities that are available in some of the most used MOOC platforms at present. In addition, the process of design of MOOC assessment activities is analyzed. Specific examples are given about how to design and create different types of assessment activities. The Genghis authoring tool as a solution for the creation of some types of exercises in the Khan Academy platform is presented. Finally, there is an analysis of the learning analytics features related to assessment activities that are present in MOOCs. Moreover, some guidelines are provided about how to interpret and take advantage of this information.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114517834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Students' Conceptions of Understanding and Its Assessment
DOI: 10.4018/978-1-5225-0531-0.CH008
R. Hamer, E. J. Rossum
Understanding means different things to different people, influencing what and how students learn and teachers teach. The mainstream understanding of understanding has not progressed beyond the first level of constructivist learning and thinking, i.e., academic understanding. This study, based on 167 student narratives, presents two hitherto unknown conceptions of understanding that match more complex ways of knowing, understanding-in-relativism and understanding-in-supercomplexity, which require the development of more complex versions of constructive alignment. Students comment that multiple-choice testing encourages learning focused on recall and recognition, while academic understanding is not assessed often and more complex forms of understanding are hardly assessed at all in higher education. However, if study success depends on assessments-of-learning that credit meaning-oriented learning and deeper understanding, students will put in the effort to succeed.
{"title":"Students' Conceptions of Understanding and Its Assessment","authors":"R. Hamer, E. J. Rossum","doi":"10.4018/978-1-5225-0531-0.CH008","DOIUrl":"https://doi.org/10.4018/978-1-5225-0531-0.CH008","url":null,"abstract":"Understanding means different things to different people, influencing what and how students learn and teachers teach. Mainstream understanding of understanding has not progressed beyond the first level of constructivist learning and thinking, ie academic understanding. This study, based on 167 student narratives, presents two hitherto unknown conceptions of understanding matching more complex ways of knowing, understanding-in-relativism and understanding-in-supercomplexity requiring the development of more complex versions of constructive alignment. Students comment that multiple choice testing encourages learning focused on recall and recognition, while academic understanding is not assessed often and more complex forms of understanding are hardly assessed at all in higher education. However, if study success depends on assessments-of-learning that credit them for meaning oriented learning and deeper understanding, students will put in effort to succeed.","PeriodicalId":320077,"journal":{"name":"Learning and Performance Assessment","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122009075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}