An assessment database for supporting educational research

M. Urban-Lurain, Diane Ebert-May, Jennifer L. Momsen, Ryan L. McFall, Matthew B. Jones, B. Leinfelder, J. Sticklen
{"title":"An assessment database for supporting educational research","authors":"M. Urban-Lurain, Diane Ebert-May, Jennifer L. Momsen, Ryan L. McFall, Matthew B. Jones, B. Leinfelder, J. Sticklen","doi":"10.1109/FIE.2009.5350595","DOIUrl":null,"url":null,"abstract":"One of the challenges of research in science education is storing, managing and querying the large amounts of diverse student assessment data that are typically collected in many Science, Technology, Engineering and Mathematics (STEM) courses. Furthermore, longitudinal studies across courses and ABET accreditation necessitate tracking students throughout their academic programs in which each course will have different types of data. Researchers need to manage, assign metadata to, merge, sort, and query all of these data to support instructional decisions, research and accreditation. To address these needs we have constructed a database to support both data-driven instructional decision making and research in STEM education. We have built upon existing metadata standards to define an extensible Educational Metadata Language (EdML) that enables assessments to be tagged based on taxonomies, standard psychometrics such as difficulty and discrimination, and other data to facilitate cross-study analyses. Once a collection of assessment data are available, faculty can examine their assessment data to evaluate historical trends, analyze the effectiveness of pedagogical techniques and strategies, or compare the performance of different teaching and assessment techniques within their course or across institutions.","PeriodicalId":129330,"journal":{"name":"2009 39th IEEE Frontiers in Education Conference","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 39th IEEE Frontiers in Education Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FIE.2009.5350595","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

One of the challenges of research in science education is storing, managing and querying the large amounts of diverse student assessment data that are typically collected in many Science, Technology, Engineering and Mathematics (STEM) courses. Furthermore, longitudinal studies across courses and ABET accreditation necessitate tracking students throughout their academic programs, in which each course will have different types of data. Researchers need to manage, assign metadata to, merge, sort, and query all of these data to support instructional decisions, research and accreditation. To address these needs, we have constructed a database to support both data-driven instructional decision making and research in STEM education. We have built upon existing metadata standards to define an extensible Educational Metadata Language (EdML) that enables assessments to be tagged based on taxonomies, standard psychometrics such as difficulty and discrimination, and other data to facilitate cross-study analyses. Once a collection of assessment data is available, faculty can examine their assessment data to evaluate historical trends, analyze the effectiveness of pedagogical techniques and strategies, or compare the performance of different teaching and assessment techniques within their course or across institutions.
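The "standard psychometrics such as difficulty and discrimination" mentioned in the abstract are classical test theory item statistics. The sketch below is illustrative only and is not taken from the paper or from EdML: it assumes a binary student-by-item response matrix and shows one common way such statistics could be computed (difficulty as proportion correct, discrimination as the point-biserial correlation with the rest-of-test score) before being attached to assessment items as metadata. The function name and data layout are assumptions for the example.

```python
# Minimal sketch, assuming a binary response matrix (rows = students,
# columns = items, 1 = correct, 0 = incorrect). Not from the paper or EdML.
import numpy as np

def item_statistics(responses: np.ndarray):
    """Return per-item (difficulty, discrimination) arrays.

    Difficulty: proportion of students answering the item correctly.
    Discrimination: point-biserial correlation between the item score
    and the rest-of-test score (total score excluding that item).
    """
    n_students, n_items = responses.shape
    difficulty = responses.mean(axis=0)
    total = responses.sum(axis=1)
    discrimination = np.empty(n_items)
    for j in range(n_items):
        rest = total - responses[:, j]  # exclude the item itself from the total
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake = (rng.random((200, 10)) > 0.4).astype(int)  # fabricated responses for illustration
    p, d = item_statistics(fake)
    for j, (pj, dj) in enumerate(zip(p, d), start=1):
        print(f"item {j:2d}: difficulty={pj:.2f}  discrimination={dj:+.2f}")
```

In a database such as the one the paper describes, values like these would be stored as metadata on each assessment item so that queries can filter or compare items across courses and studies.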