Can Information Retrieval techniques meet automatic assessment challenges?

M. Hasan
{"title":"Can Information Retrieval techniques automatic assessment challenges?","authors":"M. Hasan","doi":"10.1109/ICCIT.2009.5407259","DOIUrl":null,"url":null,"abstract":"In Information Retrieval (IR), the similarity scores between a query and a set of documents are calculated, and the relevant documents are ranked based on their similarity scores. IR systems often consider queries as short documents containing only a few words in calculating document similarity score. In Computer Aided Assessment (CAA) of narrative answers, when model answers are available, the similarity score between Students' Answers and the respective Model Answer may be a good quality-indicator. With such an analogy in mind, we applied basic IR techniques in the context of automatic assessment and discussed our findings. In this paper, we explain the development of a web-based automatic assessment system that incorporates 5 different text analysis techniques for automatic assessment of narrative answers using vector space framework. We apply Uni-gram, Bi-gram, TF.IDF, Keyphrase Extraction, and Keyphrase with Synonym Resolution before representing model answers and students' answers as document vectors; and then we compute document similarity scores. The experimental results based on 30 narrative questions with 30 model answers, and 300 student's answers (from 10 students) show that the correlation of automatic assessment with human assessment is higher when advanced text processing techniques such as Keyphrase Extraction and Synonym Resolution are applied.","PeriodicalId":443258,"journal":{"name":"2009 12th International Conference on Computers and Information Technology","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 12th International Conference on Computers and Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCIT.2009.5407259","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

In Information Retrieval (IR), the similarity scores between a query and a set of documents are calculated, and the relevant documents are ranked based on their similarity scores. When calculating document similarity scores, IR systems often treat queries as short documents containing only a few words. In Computer Aided Assessment (CAA) of narrative answers, when model answers are available, the similarity score between a student's answer and the respective model answer may be a good quality indicator. With this analogy in mind, we applied basic IR techniques in the context of automatic assessment and discuss our findings. In this paper, we describe the development of a web-based automatic assessment system that incorporates five different text analysis techniques for automatic assessment of narrative answers using a vector space framework. We apply Uni-gram, Bi-gram, TF.IDF, Keyphrase Extraction, and Keyphrase with Synonym Resolution before representing model answers and students' answers as document vectors, and then compute document similarity scores. The experimental results, based on 30 narrative questions with 30 model answers and 300 students' answers (from 10 students), show that the correlation of automatic assessment with human assessment is higher when advanced text processing techniques such as Keyphrase Extraction and Synonym Resolution are applied.
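The scoring pipeline described above reduces to representing each model answer and student answer as a term-weighted vector and measuring their similarity. Below is a minimal sketch of that idea, assuming TF-IDF weighting over unigrams and bigrams with cosine similarity via scikit-learn; the function name `score_answer` and the sample answers are illustrative assumptions, not taken from the paper's system.

```python
# Illustrative sketch only (not the authors' implementation): score a student
# answer against a model answer with unigram/bigram TF-IDF vectors and cosine
# similarity, following a generic vector-space approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def score_answer(model_answer: str, student_answer: str,
                 ngram_range=(1, 2)) -> float:
    """Return a cosine-similarity score in [0, 1] between the two answers."""
    vectorizer = TfidfVectorizer(ngram_range=ngram_range, stop_words="english")
    # Fit on both texts so they share one vocabulary, then compare the rows.
    vectors = vectorizer.fit_transform([model_answer, student_answer])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])


model = "Information retrieval ranks documents by their similarity to a query."
student = "IR systems rank documents according to how similar they are to the query."
print(f"similarity score: {score_answer(model, student):.3f}")
```

The paper's more advanced variants (Keyphrase Extraction and Keyphrase with Synonym Resolution) would replace or augment the raw n-gram features before vectorization; this sketch covers only the baseline Uni-gram/Bi-gram and TF.IDF cases.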