
Proceedings of the Tenth International Conference on Learning Analytics & Knowledge: Latest Publications

Quantifying data sensitivity: precise demonstration of care when building student prediction models
Charles Lang, Charlotte Woo, Jeanne Sinclair
Until recently, an assumption within the predictive modelling community has been that collecting more student data is always better. But in reaction to recent high-profile data privacy scandals, many educators, scholars, students and administrators have been questioning the ethics of such a strategy. Suggestions are growing that only the minimum amount of data needed for the prediction being made should be collected. Yet machine learning algorithms are primarily judged on metrics derived from prediction accuracy or on whether they meet probabilistic criteria for significance. They are not routinely judged on whether they utilize the minimum number of the least sensitive features, preserving what we name here as data collection parsimony. We believe the ability to assess data collection parsimony would be a valuable addition to the suite of evaluations for any prediction strategy. To that end, the following paper provides an introduction to data collection parsimony, describes a novel method for quantifying the concept using empirical Bayes estimates, and then tests the metric on real-world data. Both theoretical and empirical benefits and limitations of this method are discussed. We conclude that for the purpose of model building this metric is superior to others in several ways, but there are some hurdles to effective implementation.
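The abstract does not reproduce the metric itself, but the empirical Bayes machinery it mentions can be sketched generically. The function below is a hypothetical illustration (the name `eb_shrinkage` and the method-of-moments variance estimate are assumptions, not the authors' implementation): it shrinks noisy per-group estimates toward the grand mean in proportion to their uncertainty, which is the core move in empirical Bayes estimation.

```python
import numpy as np

def eb_shrinkage(group_means, group_sizes, sigma2):
    """Empirical Bayes (method-of-moments) shrinkage of noisy per-group
    means toward the grand mean. Illustrative sketch only: the paper's
    actual parsimony metric is not reproduced here."""
    group_means = np.asarray(group_means, dtype=float)
    group_sizes = np.asarray(group_sizes, dtype=float)
    grand_mean = np.average(group_means, weights=group_sizes)
    # Between-group variance estimate, floored at zero.
    tau2 = max(np.var(group_means) - sigma2 / group_sizes.mean(), 0.0)
    # Shrinkage weight: smaller groups (noisier means) shrink more.
    shrink = (sigma2 / group_sizes) / (sigma2 / group_sizes + tau2 + 1e-12)
    return shrink * grand_mean + (1 - shrink) * group_means
```

Each estimate lands between the raw group mean and the grand mean, with the pull toward the grand mean growing as the within-group noise `sigma2` dominates.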
Citations: 6
Towards automated analysis of cognitive presence in MOOC discussions: a manual classification study
Yuanyuan Hu, C. Donald, Nasser Giacaman, Zexuan Zhu
This paper reports on early stages of a machine learning research project, where phases of cognitive presence in MOOC discussions were manually coded in preparation for training automated cognitive classifiers. We present a manual-classification rubric combining Garrison, Anderson and Archer's [11] coding scheme with Park's [25] revised version for a target MOOC. The inter-rater reliability between two raters achieved 95.4% agreement with a Cohen's weighted kappa of 0.96, demonstrating our classification rubric is plausible for the target MOOC dataset. The classification rubric, originally intended for for-credit, undergraduate courses, can be applied to a MOOC context. We found that the main disagreements between two raters lay on adjacent cognitive phases, implying that additional categories may exist between cognitive phases in such MOOC discussion messages. Overall, our results suggest a reliable rubric for classifying cognitive phases in discussion messages of the target MOOC by two raters. This indicates we are in a position to apply machine learning algorithms which can also cater for data with inter-rater disagreements in future automated classification studies.
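The agreement figures cited (95.4% raw agreement, weighted kappa of 0.96) can be checked with a standard quadratic-weighted Cohen's kappa. A minimal sketch with made-up ratings, not the study's data:

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_cat):
    """Cohen's kappa with quadratic disagreement weights."""
    a, b = np.asarray(a), np.asarray(b)
    observed = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        observed[i, j] += 1
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / len(a)
    idx = np.arange(n_cat)
    weights = (idx[:, None] - idx[None, :]) ** 2  # quadratic penalty
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Invented ratings over five cognitive phases (0-4); the single
# disagreement is between adjacent phases, as the paper reports.
rater_a = [1, 2, 2, 3, 4, 0, 1, 2]
rater_b = [1, 2, 3, 3, 4, 0, 1, 2]
kappa = quadratic_weighted_kappa(rater_a, rater_b, n_cat=5)
```

Because the quadratic weights penalize adjacent-phase disagreements only lightly, a coding scheme where raters disagree mainly between neighboring phases can still score a very high weighted kappa.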
Citations: 15
Testing the reliability of inter-rater reliability
Brendan R. Eagan, Jais Brohinsky, Jingyi Wang, D. Shaffer
Analyses of learning often rely on coded data. One important aspect of coding is establishing reliability. Previous research has shown that the common approach for establishing coding reliability is seriously flawed in that it produces unacceptably high Type I error rates. This paper tests whether these error rates are tied to specific reliability metrics or reflect a larger methodological problem. Our results show that the inflated error rates are not specific to any one metric, and we suggest the adoption of new practices to control Type I error rates when establishing coding reliability.
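The kind of Type I error at issue, certifying reliability from a short double-coded sample and then coding the rest solo, can be probed with a small Monte Carlo. The simulation below is an illustrative sketch; the parameters and design are assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_kappa_pass_rate(true_agree=0.88, base_rate=0.5, n_items=40,
                             threshold=0.6, trials=2000):
    """Monte Carlo estimate of how often a short double-coded sample
    passes a kappa threshold when the raters' true reliability sits
    below it. With true_agree=0.88 the long-run kappa is roughly 0.58,
    under the 0.6 bar. All parameters are illustrative assumptions."""
    passes = 0
    for _ in range(trials):
        truth = rng.random(n_items) < base_rate
        # Each rater reproduces the 'true' code with probability true_agree.
        a = np.where(rng.random(n_items) < true_agree, truth, ~truth)
        b = np.where(rng.random(n_items) < true_agree, truth, ~truth)
        po = np.mean(a == b)                    # observed agreement
        pa, pb = a.mean(), b.mean()
        pe = pa * pb + (1 - pa) * (1 - pb)      # chance agreement
        kappa = (po - pe) / (1 - pe) if pe < 1 else 0.0
        passes += kappa >= threshold
    return passes / trials

rate = simulate_kappa_pass_rate()
```

On a 40-item sample, raters whose true kappa falls short of the threshold still clear it in a sizeable fraction of trials, which is the sampling-variability problem behind the inflated Type I error rates.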
Citations: 6
The automated grading of student open responses in mathematics
John A. Erickson, Anthony F. Botelho, Steven McAteer, A. Varatharaj, N. Heffernan
The use of computer-based systems in classrooms has provided teachers with new opportunities in delivering content to students, supplementing instruction, and assessing student knowledge and comprehension. Among the largest benefits of these systems is their ability to provide students with feedback on their work and to report student performance and progress to their teacher. While computer-based systems can automatically assess student answers to a range of question types, a limitation faced by many systems concerns open-ended problems. Many systems are either unable to support open-ended problems, relying on the teacher to grade them manually, or avoid such question types entirely. Due to recent advancements in natural language processing methods, the automation of essay grading has made notable strides. However, much of this research has pertained to domains outside of mathematics, where teachers can use open-ended problems to assess students' understanding of mathematical concepts beyond what other question types allow. This research explores the viability and challenges of developing automated graders of open-ended student responses in mathematics. We further explore how the scale of available data impacts model performance. Focusing on content delivered through the ASSISTments online learning platform, we present a set of analyses pertaining to the development and evaluation of models to predict teacher-assigned grades for student open responses.
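As an illustration of the general approach rather than the authors' models, a teacher-grade predictor for open responses can be sketched as a bag-of-words text classifier. The dataset and grade scale below are invented for the example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented mini-dataset: open responses with teacher-assigned grades 0-2.
responses = [
    "i added the two numbers together",
    "the slope is rise over run so it equals two",
    "i do not know",
    "no idea just guessing",
    "multiplied both sides by two then divided by three",
    "the slope of the line equals two",
]
grades = [1, 2, 0, 0, 2, 2]

# TF-IDF features feeding a multinomial logistic regression grader.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(responses, grades)
pred = model.predict(["the slope equals two"])[0]
```

A real grader would be trained per problem (or across problems with problem identifiers) on many teacher-graded responses; the pipeline shape, vectorize then classify, stays the same.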
Citations: 27
What college students say, and what they do: aligning self-regulated learning theory with behavioral logs
Joshua Quick, Benjamin A. Motz, Jamie Israel, Jason Kaetzel
A central concern in learning analytics specifically, and educational research more generally, is the alignment of robust, coherent measures to well-developed conceptual and theoretical frameworks. Capturing and representing processes of learning remains an ongoing challenge in all areas of educational inquiry and raises substantive considerations on the nature of learning, knowledge, and assessment & measurement that have been continuously refined across areas of education and pedagogical practice. Learning analytics, as a still-developing method of inquiry, has yet to substantively navigate the alignment of the measurement, capture, and representation of learning to theoretical frameworks, despite being used to address practical concerns such as identifying at-risk students. This study seeks to address these concerns by comparing behavioral measurements from learning management systems to established measurements of components of learning as understood through self-regulated learning frameworks. Using several prominent and robustly supported self-report survey measures designed to identify dimensions of self-regulated learning, as well as typical behavioral features extracted from a learning management system, we conducted descriptive and exploratory analyses of the relational structures of these data. With the exception of learners' self-reported time management strategies and level of motivation, the current results indicate that behavioral measures were not well correlated with survey measurements. Possibilities and recommendations for learning analytics as measurements for self-regulated learning are discussed.
Citations: 7
Design of a curriculum analytics tool to support continuous improvement processes in higher education
Isabel Hilliger, Camila Aguirre, Constanza Miranda, S. Celis, M. Pérez-Sanagustín
Curriculum analytics (CA) emerged as a sub-field of learning analytics, aiming to use evidence to drive curriculum decision-making and program improvement. However, its overall impact on program outcomes remains unknown. In this context, this paper presents work in progress from a large research project to understand how CA could support continuous improvement processes at the program level. We followed a design-based research approach, the Integrative Learning Design Framework, to develop a CA tool. This paper describes three of the four phases of this framework and their main results, including an evaluation of the local impact of the CA tool. This evaluation consisted of an instrumental case study of the tool's use to support 124 teaching staff in a 3-year continuous improvement process at a Latin American university. Lessons learned indicate that the tool helped staff collect information for curriculum discussions, facilitating the availability of evidence regarding student competency attainment. To generalize these lessons, future work will consist of evaluating the tool in different university settings.
Citations: 19
Evaluating teachers' perceptions of students' questions organization
Fatima Harrak, François Bouchet, Vanda Luengo, P. Gillois
Students' questions are essential to help teachers assess their understanding and adapt their pedagogy. However, in a flipped classroom context where many questions are asked online to be addressed in class, selecting questions can be difficult for teachers. To help them in this task, we present three alternative ways of organizing questions: one based on pedagogical needs, one based on estimated students' profiles, and one mixing both approaches. Results of a survey completed by 37 teachers using a flipped classroom pedagogy show no consensus on a single organization. A cluster analysis based on teachers' flipped classroom experience allowed us to distinguish two profiles, but these were not associated with any particular question organization preference. Qualitative results suggest that the need for different organizations may depend on pedagogical philosophy, and argue for differentiated dashboards.
Citations: 0
Analyzing the consistency in within-activity learning patterns in blended learning
Varshita Sher, M. Hatala, D. Gašević
Performance and consistency play a large role in learning. This study analyzes the relation between consistency in students' online work habits and academic performance in a blended course. We utilize log data recorded by a learning management system (LMS) in two information technology courses. The two courses required the completion of monthly asynchronous online discussion tasks and weekly assignments, respectively. We measure consistency using the Dynamic Time Warping (DTW) distance between two successive tasks (assignments or discussions), an appropriate measure of time-series similarity, over an 11-day window starting 10 days before and ending at the submission deadline. We found meaningful clusters of students exhibiting similar behavior and use these to identify three distinct consistency patterns: highly consistent, incrementally consistent, and inconsistent users. We also found evidence of significant associations between these patterns and learners' academic performance.
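DTW itself is the standard dynamic-programming recurrence; a minimal implementation for comparing two one-dimensional activity series (illustrative, not the study's code):

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic Time Warping distance via the classic DP recurrence."""
    n, m = len(x), len(y)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j],      # insertion
                                   acc[i, j - 1],      # deletion
                                   acc[i - 1, j - 1])  # match
    return acc[n, m]
```

Applied to two 11-day activity series for successive tasks, a small DTW distance indicates the student worked with a similar temporal pattern on both, even if the pattern is shifted by a day or two; the distance matrix over task pairs then feeds the clustering.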
Citations: 15
Constructing and predicting school advice for academic achievement: a comparison of item response theory and machine learning techniques
Koen Niemeijer, R. Feskens, G. Krempl, J. Koops, Matthieu J. S. Brinkhuis
Educational tests can be used to estimate pupils' abilities and thereby indicate whether their school type is suitable for them. However, tests in education are usually conducted for each content area separately, which makes it difficult to combine these results into a single school advice. To help with school advice, we provide a comparison between domain-specific and domain-agnostic methods for predicting school types. Both use data from a pupil monitoring system in the Netherlands, a system that tracks pupils' educational progress over several years through a series of tests measuring multiple skills. First, a domain-specific item response theory (IRT) model is calibrated, from which an ability score is extracted and subsequently plugged into a multinomial log-linear regression model. Second, we train domain-agnostic machine learning (ML) models: a random forest (RF) and a shallow neural network (NN). Furthermore, we apply case weighting to give extra attention to pupils who switched between school types. When considering the performance of all pupils, RFs provided the most accurate predictions, followed by NNs and then IRT. When looking only at the performance of pupils who switched school type, IRT performed best, followed by NNs and RFs. Case weighting proved to provide a major improvement for this group. Lastly, IRT was found to be much easier to explain than the other models. Thus, while ML provided more accurate results, this comes at the cost of lower explainability in comparison to IRT.
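The case-weighting step maps directly onto the `sample_weight` argument in standard ML libraries. A hedged sketch with synthetic stand-in data (features, labels, and the weight of 3.0 are invented for illustration, not taken from the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in data: three test-score features, a binary
# school-type label, and a flag for pupils who switched school type.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
switched = rng.random(200) < 0.15

# Case weighting: up-weight switchers so their misclassifications cost
# more during training, pulling the model toward this minority group.
weights = np.where(switched, 3.0, 1.0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y, sample_weight=weights)
```

The same `sample_weight` pattern applies to the neural network and the multinomial regression baselines, which is what makes the weighting scheme comparable across all three model families.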
{"title":"Constructing and predicting school advice for academic achievement: a comparison of item response theory and machine learning techniques","authors":"Koen Niemeijer, R. Feskens, G. Krempl, J. Koops, Matthieu J. S. Brinkhuis","doi":"10.1145/3375462.3375486","DOIUrl":"https://doi.org/10.1145/3375462.3375486","url":null,"abstract":"Educational tests can be used to estimate pupils' abilities and thereby give an indication of whether their school type is suitable for them. However, tests in education are usually conducted for each content area separately which makes it difficult to combine these results into one single school advice. To help with school advice, we provide a comparison between both domain-specific and domain-agnostic methods for predicting school types. Both use data from a pupil monitoring system in the Netherlands, a system that keeps track of pupils' educational progress over several years by a series of tests measuring multiple skills. A domain-specific item response theory (IRT) model is calibrated from which an ability score is extracted and is subsequently plugged into a multinomial log-linear regression model. Second, we train domain-agnostic machine learning (ML) models. These are a random forest (RF) and a shallow neural network (NN). Furthermore, we apply case weighting to give extra attention to pupils who switched between school types. When considering the performance of all pupils, RFs provided the most accurate predictions followed by NNs and IRT respectively. When only looking at the performance of pupils who switched school type, IRT performed best followed by NNs and RFs. Case weighting proved to provide a major improvement for this group. Lastly, IRT was found to be much easier to explain in comparison to the other models. 
Thus, while ML provided more accurate results, this comes at the cost of a lower explainability in comparison to IRT.","PeriodicalId":355800,"journal":{"name":"Proceedings of the Tenth International Conference on Learning Analytics & Knowledge","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120935265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
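The domain-specific pipeline above first calibrates an IRT model and extracts an ability score, which is then plugged into a multinomial log-linear regression. As a sketch of that first stage under the simplest IRT variant — the Rasch (1PL) model, chosen here for illustration; the paper does not specify its exact model — the following estimates one pupil's ability by Newton-Raphson, given known item difficulties. All numbers are hypothetical:

```python
import math

def rasch_ability(responses, difficulties, iters=20):
    """Newton-Raphson MLE of ability theta under the Rasch (1PL) model.

    responses: list of 0/1 item scores; difficulties: known item parameters.
    Note: the MLE diverges for all-correct or all-wrong response patterns.
    """
    theta = 0.0
    for _ in range(iters):
        probs = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, probs))  # score function
        info = sum(p * (1.0 - p) for p in probs)             # Fisher information
        theta += grad / info
    return theta

# Hypothetical item difficulties and one pupil's right/wrong pattern.
difficulties = [-1.5, -0.5, 0.0, 0.5, 1.0, 2.0]
responses    = [1, 1, 1, 0, 1, 0]  # 4 of 6 items correct
theta = rasch_ability(responses, difficulties)
```

In the paper's setup, scores like `theta` (one per skill domain) would become the predictors in the multinomial regression over school types; the RF and NN baselines skip this calibration step and consume the raw test data directly.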
Let's shine together!: a comparative study between learning analytics and educational data mining
Guanliang Chen, V. Rolim, R. F. Mello, D. Gašević
Learning Analytics and Knowledge (LAK) and Educational Data Mining (EDM) are two of the most popular venues for researchers and practitioners to report and disseminate discoveries in data-intensive research on technology-enhanced education. After the development of about a decade, it is time to scrutinize and compare these two venues. By doing this, we expected to inform relevant stakeholders of a better understanding of the past development of LAK and EDM and provide suggestions for their future development. Specifically, we conducted an extensive comparison analysis between LAK and EDM from four perspectives, including (i) the topics investigated; (ii) community development; (iii) community diversity; and (iv) research impact. Furthermore, we applied one of the most widely-used language modeling techniques (Word2Vec) to capture words used frequently by researchers to describe future works that can be pursued by building upon suggestions made in the published papers to shed light on potential directions for future research.
{"title":"Let's shine together!: a comparative study between learning analytics and educational data mining","authors":"Guanliang Chen, V. Rolim, R. F. Mello, D. Gašević","doi":"10.1145/3375462.3375500","DOIUrl":"https://doi.org/10.1145/3375462.3375500","url":null,"abstract":"Learning Analytics and Knowledge (LAK) and Educational Data Mining (EDM) are two of the most popular venues for researchers and practitioners to report and disseminate discoveries in data-intensive research on technology-enhanced education. After the development of about a decade, it is time to scrutinize and compare these two venues. By doing this, we expected to inform relevant stakeholders of a better understanding of the past development of LAK and EDM and provide suggestions for their future development. Specifically, we conducted an extensive comparison analysis between LAK and EDM from four perspectives, including (i) the topics investigated; (ii) community development; (iii) community diversity; and (iv) research impact. Furthermore, we applied one of the most widely-used language modeling techniques (Word2Vec) to capture words used frequently by researchers to describe future works that can be pursued by building upon suggestions made in the published papers to shed light on potential directions for future research.","PeriodicalId":355800,"journal":{"name":"Proceedings of the Tenth International Conference on Learning Analytics & Knowledge","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115999565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
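The abstract above applies Word2Vec to words researchers use when describing future work. As a simplified, hypothetical illustration of the underlying distributional idea — words that share contexts get similar representations — the following substitutes a raw co-occurrence matrix plus cosine similarity for Word2Vec's learned dense embeddings; the sentences are invented, not drawn from the corpus the paper analyzed:

```python
import numpy as np

# Toy "future work" sentences (hypothetical).
sentences = [
    "future work will explore deep learning models",
    "future research will explore neural models",
    "we plan to evaluate fairness in prediction",
    "future studies should evaluate fairness metrics",
]

window = 2  # symmetric context window, as in skip-gram training
vocab = sorted({w for s in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))

# Count co-occurrences of each word with its neighbors inside the window.
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                cooc[idx[w], idx[words[j]]] += 1

def most_similar(word, k=3):
    """Rank other words by cosine similarity of their context vectors."""
    v = cooc[idx[word]]
    sims = cooc @ v / (np.linalg.norm(cooc, axis=1) * np.linalg.norm(v) + 1e-9)
    sims[idx[word]] = -1  # exclude the query word itself
    return [vocab[i] for i in np.argsort(-sims)[:k]]
```

Here "work" and "research" end up nearest neighbors because they appear in near-identical contexts — the same property Word2Vec exploits, at scale and with dense vectors, to surface recurring future-research themes.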
Journal
Proceedings of the Tenth International Conference on Learning Analytics & Knowledge