Using Semantic Technologies for Formative Assessment and Scoring in Large Courses and MOOCs

Journal of Interactive Media in Education · IF 2.7 · Q1 (EDUCATION & EDUCATIONAL RESEARCH) · Pub Date: 2018-08-15 · DOI: 10.5334/JIME.468
M. Lancho, Mauro Hernández, Á. S. Paniagua, José María Luzón Encabo, Guillermo Jorge-Botana
{"title":"利用语义技术对大型课程和MOOC进行形成性评估和评分","authors":"M. Lancho, Mauro Hernández, Á. S. Paniagua, José María Luzón Encabo, Guillermo Jorge-Botana","doi":"10.5334/JIME.468","DOIUrl":null,"url":null,"abstract":"Formative assessment and personalised feedback are commonly recognised as key factors both for improving students’ performance and increasing their motivation and engagement (Gibbs and Simpson, 2005). Currently, in large and massive open online courses (MOOCs), technological solutions to give feedback are often limited to quizzes of different kinds. At present, one of our challenges is to provide feedback for open-ended questions through semantic technologies in a sustainable way. To face such a challenge, our academic team decided to use a test based on latent semantic analysis (LSA) and chose an automatic assessment tool named G-Rubric. G-Rubric was developed by researchers at the Developmental and Educational Psychology Department of UNED (Spanish national distance education university). By using G-Rubric, automated formative and iterative feedback was provided to students for different types of open-ended questions (70–800 words). This feedback allowed students to improve their answers and writing skills, thus contributing both to a better grasp of concepts and to the building of knowledge. In this paper, we present the promising results of our first experiences with UNED business degree students along three academic courses (2014–15, 2015–16 and 2016–17). These experiences show to what extent assessment software such as G-Rubric is mature enough to be used with students. It offers them enriched and personalised feedback that proved entirely satisfactory. Furthermore, G-Rubric could help to deal with the problems related to manual grading, even though our final goal is not to replace tutors by semantic tools, but to give support to tutors who are grading assignments.","PeriodicalId":45406,"journal":{"name":"Journal of Interactive Media in Education","volume":null,"pages":null},"PeriodicalIF":2.7000,"publicationDate":"2018-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"23","resultStr":"{\"title\":\"Using Semantic Technologies for Formative Assessment and Scoring in Large Courses and MOOCs\",\"authors\":\"M. Lancho, Mauro Hernández, Á. S. Paniagua, José María Luzón Encabo, Guillermo Jorge-Botana\",\"doi\":\"10.5334/JIME.468\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Formative assessment and personalised feedback are commonly recognised as key factors both for improving students’ performance and increasing their motivation and engagement (Gibbs and Simpson, 2005). Currently, in large and massive open online courses (MOOCs), technological solutions to give feedback are often limited to quizzes of different kinds. At present, one of our challenges is to provide feedback for open-ended questions through semantic technologies in a sustainable way. To face such a challenge, our academic team decided to use a test based on latent semantic analysis (LSA) and chose an automatic assessment tool named G-Rubric. G-Rubric was developed by researchers at the Developmental and Educational Psychology Department of UNED (Spanish national distance education university). By using G-Rubric, automated formative and iterative feedback was provided to students for different types of open-ended questions (70–800 words). 
This feedback allowed students to improve their answers and writing skills, thus contributing both to a better grasp of concepts and to the building of knowledge. In this paper, we present the promising results of our first experiences with UNED business degree students along three academic courses (2014–15, 2015–16 and 2016–17). These experiences show to what extent assessment software such as G-Rubric is mature enough to be used with students. It offers them enriched and personalised feedback that proved entirely satisfactory. Furthermore, G-Rubric could help to deal with the problems related to manual grading, even though our final goal is not to replace tutors by semantic tools, but to give support to tutors who are grading assignments.\",\"PeriodicalId\":45406,\"journal\":{\"name\":\"Journal of Interactive Media in Education\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2018-08-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"23\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Interactive Media in Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5334/JIME.468\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Interactive Media in Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5334/JIME.468","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 23

Abstract

Formative assessment and personalised feedback are commonly recognised as key factors both for improving students’ performance and increasing their motivation and engagement (Gibbs and Simpson, 2005). Currently, in large and massive open online courses (MOOCs), technological solutions to give feedback are often limited to quizzes of different kinds. At present, one of our challenges is to provide feedback for open-ended questions through semantic technologies in a sustainable way. To face such a challenge, our academic team decided to use a test based on latent semantic analysis (LSA) and chose an automatic assessment tool named G-Rubric. G-Rubric was developed by researchers at the Developmental and Educational Psychology Department of UNED (the Spanish national distance education university). By using G-Rubric, automated formative and iterative feedback was provided to students for different types of open-ended questions (70–800 words). This feedback allowed students to improve their answers and writing skills, thus contributing both to a better grasp of concepts and to the building of knowledge. In this paper, we present the promising results of our first experiences with UNED business degree students across three academic years (2014–15, 2015–16 and 2016–17). These experiences show to what extent assessment software such as G-Rubric is mature enough to be used with students. It offers them enriched and personalised feedback that proved entirely satisfactory. Furthermore, G-Rubric could help to deal with the problems related to manual grading, even though our final goal is not to replace tutors with semantic tools, but to give support to tutors who are grading assignments.
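The abstract names latent semantic analysis (LSA) as the basis of G-Rubric but does not describe its internals. Purely as an illustration of the general technique, the sketch below shows one common way LSA-based scoring of an open-ended answer can work: learn a reduced semantic space from a background corpus, project a reference answer and a student answer into that space, and score their cosine similarity. The corpus, the texts, the scikit-learn library choice and the scoring rule are all hypothetical and not taken from the paper.

```python
# Illustrative sketch only: LSA-based similarity scoring of an open-ended answer.
# This is NOT the G-Rubric implementation; corpus, texts and scoring rule are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Tiny background corpus used to learn the latent semantic space
# (in practice this would be a large domain-specific corpus, e.g. course readings).
corpus = [
    "Formative assessment gives students feedback while they are still learning.",
    "Summative assessment measures achievement at the end of a course.",
    "Personalised feedback can increase student motivation and engagement.",
    "Latent semantic analysis represents texts as vectors in a reduced space.",
]

reference_answer = ("Formative assessment provides ongoing feedback that helps "
                    "students improve their understanding during the course.")
student_answer = ("Giving students feedback during the course helps them learn "
                  "and stay motivated.")

# Learn the term space from the corpus plus the reference answer.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus + [reference_answer])

# Reduce to a low-dimensional latent semantic space (the core of LSA).
svd = TruncatedSVD(n_components=3, random_state=0)
svd.fit(X)

# Project both answers into that space and compare them.
ref_vec = svd.transform(vectorizer.transform([reference_answer]))
stu_vec = svd.transform(vectorizer.transform([student_answer]))
similarity = cosine_similarity(ref_vec, stu_vec)[0, 0]

# Map similarity to a 0-10 score (a deliberately naive rule for illustration).
score = round(max(similarity, 0.0) * 10, 1)
print(f"semantic similarity: {similarity:.2f} -> score: {score}/10")
```

In a real system of this kind, the semantic space would be trained on a much larger corpus, and the raw similarity would typically be combined with other checks (answer length, coverage of key concepts, rubric weights) before being turned into feedback or a grade.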
Source journal: Journal of Interactive Media in Education (EDUCATION & EDUCATIONAL RESEARCH)
CiteScore: 6.40
Self-citation rate: 6.70%
Number of articles: 8
Review time: 16 weeks