Quantifying learning from web-based course materials using different pre and post tests

P. Steif, M. Lovett, A. Dollár
{"title":"使用不同的前后测试,量化网络课程材料的学习情况","authors":"P. Steif, M. Lovett, A. Dollár","doi":"10.1109/FIE.2012.6462255","DOIUrl":null,"url":null,"abstract":"Engineering instructors seek to gauge the effectiveness of their instruction. One gauge has been to use standardized tests, such as concept inventories and to quantify learning as the change in score over the semester. Here we question whether that approach is always the best practice for gauging the effect of instruction, and we propose an alternative of administering different tests at the start and end of the semester. In particular, to gauge the influence of one aspect of instruction, the use of interactive web-based course materials that had been developed for Statics, we administered the Force Concept Inventory at the start of the course, and the Statics Concept Inventory at the end of the course. Correlations and then linear regression were applied to study how conceptual knowledge measured at the end of the course depended on conceptual knowledge measured at the start and the amount of use of the web-based courseware. Usage of the web-based courseware was found to promote conceptual knowledge at the end of the course in a statistically significant way only after accounting for initial knowledge as judged by the different conceptual test administered at the start of the course. Thus, it is not necessary to measure gain on one test; instead each test should capture well the variation in relevant ability across students at the time the test is administered.","PeriodicalId":120268,"journal":{"name":"2012 Frontiers in Education Conference Proceedings","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2012-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Quantifying learning from web-based course materials using different pre and post tests\",\"authors\":\"P. Steif, M. Lovett, A. Dollár\",\"doi\":\"10.1109/FIE.2012.6462255\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Engineering instructors seek to gauge the effectiveness of their instruction. One gauge has been to use standardized tests, such as concept inventories and to quantify learning as the change in score over the semester. Here we question whether that approach is always the best practice for gauging the effect of instruction, and we propose an alternative of administering different tests at the start and end of the semester. In particular, to gauge the influence of one aspect of instruction, the use of interactive web-based course materials that had been developed for Statics, we administered the Force Concept Inventory at the start of the course, and the Statics Concept Inventory at the end of the course. Correlations and then linear regression were applied to study how conceptual knowledge measured at the end of the course depended on conceptual knowledge measured at the start and the amount of use of the web-based courseware. Usage of the web-based courseware was found to promote conceptual knowledge at the end of the course in a statistically significant way only after accounting for initial knowledge as judged by the different conceptual test administered at the start of the course. 
Thus, it is not necessary to measure gain on one test; instead each test should capture well the variation in relevant ability across students at the time the test is administered.\",\"PeriodicalId\":120268,\"journal\":{\"name\":\"2012 Frontiers in Education Conference Proceedings\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-10-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 Frontiers in Education Conference Proceedings\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/FIE.2012.6462255\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 Frontiers in Education Conference Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FIE.2012.6462255","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Engineering instructors seek to gauge the effectiveness of their instruction. One gauge has been to use standardized tests, such as concept inventories, and to quantify learning as the change in score over the semester. Here we question whether that approach is always the best practice for gauging the effect of instruction, and we propose an alternative of administering different tests at the start and end of the semester. In particular, to gauge the influence of one aspect of instruction, the use of interactive web-based course materials developed for Statics, we administered the Force Concept Inventory at the start of the course and the Statics Concept Inventory at the end of the course. Correlations and then linear regression were applied to study how conceptual knowledge measured at the end of the course depended on conceptual knowledge measured at the start and on the amount of use of the web-based courseware. Usage of the web-based courseware was found to promote conceptual knowledge at the end of the course in a statistically significant way only after accounting for initial knowledge, as judged by the different conceptual test administered at the start of the course. Thus, it is not necessary to measure gain on a single test; instead, each test should capture well the variation in relevant ability across students at the time it is administered.
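
The analysis described above amounts to regressing the end-of-course concept-inventory score on the start-of-course score and the amount of courseware use, so that the courseware effect is estimated after accounting for initial knowledge. The sketch below illustrates that setup with statsmodels; the variable names, sample size, and simulated values are assumptions made purely for illustration and do not reproduce the study's data.

```python
# Minimal sketch of the correlation-then-regression analysis described in the
# abstract. All data here is simulated; fci_pre, courseware_use, and sci_post
# are hypothetical stand-ins for the study's actual measurements.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # assumed number of students

fci_pre = rng.normal(20, 5, n)          # Force Concept Inventory score at course start
courseware_use = rng.uniform(0, 40, n)  # amount of web-based courseware use (assumed scale)
sci_post = 0.4 * fci_pre + 0.2 * courseware_use + rng.normal(0, 3, n)  # Statics Concept Inventory at course end

# Step 1: simple correlations of each measure with the end-of-course score.
print("corr(pre, post):  ", np.corrcoef(fci_pre, sci_post)[0, 1])
print("corr(usage, post):", np.corrcoef(courseware_use, sci_post)[0, 1])

# Step 2: linear regression of the post-test score on both predictors, so the
# courseware coefficient reflects its effect after controlling for initial knowledge.
X = sm.add_constant(np.column_stack([fci_pre, courseware_use]))
model = sm.OLS(sci_post, X).fit()
print(model.summary())
```

In this setup, a statistically significant coefficient on courseware_use with fci_pre in the model corresponds to the paper's finding that courseware usage predicts end-of-course conceptual knowledge once initial knowledge is accounted for, even though the pre and post measures come from different tests.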