An active viewing framework for video-based learning
Samuel Dodson, Ido Roll, Matthew Fong, Dongwook Yoon, N. M. Harandi, S. Fels
In Proceedings of the Fifth Annual ACM Conference on Learning at Scale (L@S '18). DOI: 10.1145/3231644.3231682
Video-based learning is most effective when students are engaged with video content; however, the literature has yet to identify students' viewing behaviors and ground them in theory. This paper addresses this need by introducing a framework of active viewing, situated in an established model of active learning, to describe students' behaviors while learning from video. We conducted a field study with 460 undergraduates in an Applied Science course using a video player designed for active viewing to evaluate how students engage in passive and active video-based learning. The concept of active viewing, and the role of interactive, constructive, active, and passive behaviors in video-based learning, can inform the design and evaluation of video players.

An automatic knowledge graph construction system for K-12 education
Penghe Chen, Yu Lu, V. Zheng, Xiyang Chen, Xiaoqing Li
In Proceedings of the Fifth Annual ACM Conference on Learning at Scale (L@S '18). DOI: 10.1145/3231644.3231698
Motivated by the pressing need for knowledge graphs in educational applications, we develop a system, called K12EduKG, to automatically construct knowledge graphs for K-12 educational subjects. Leveraging heterogeneous domain-specific educational data, K12EduKG extracts educational concepts and identifies implicit relations of high educational significance. More specifically, it applies named entity recognition (NER) techniques to educational data such as curriculum standards to extract educational concepts, and employs data mining techniques to identify cognitive prerequisite relations between those concepts. In this paper, we present the details of K12EduKG and demonstrate it with a knowledge graph constructed for the subject of mathematics.

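The abstract above names the two stages of the K12EduKG pipeline: NER-based concept extraction and data mining of prerequisite relations. The paper does not include code, so the following is only a minimal sketch of that pipeline shape, with a hand-built gazetteer matcher standing in for a trained NER model and a simple first-mention ordering heuristic standing in for the paper's data-mining step; the concept list and curriculum sentences are invented:

```python
import re

# Toy stand-in for the K12EduKG pipeline: a gazetteer matcher replaces the
# trained NER model, and prerequisite edges are inferred from the order in
# which concepts are first introduced across curriculum-standard sentences.
CONCEPTS = {"integer", "fraction", "equation", "linear equation"}

def extract_concepts(sentence):
    """Return known concepts mentioned in a sentence (longest match first)."""
    found = []
    lowered = sentence.lower()
    for concept in sorted(CONCEPTS, key=len, reverse=True):
        if re.search(r"\b" + re.escape(concept) + r"s?\b", lowered):
            found.append(concept)
    return found

def mine_prerequisites(sentences):
    """Emit (a, b) edges when concept a is introduced strictly before b."""
    first_seen = {}
    for i, s in enumerate(sentences):
        for c in extract_concepts(s):
            first_seen.setdefault(c, i)
    ordered = sorted(first_seen, key=first_seen.get)
    return [(a, b)
            for i, a in enumerate(ordered)
            for b in ordered[i + 1:]
            if first_seen[a] < first_seen[b]]

curriculum = [
    "Students compute with integers.",
    "Students compare fractions using integer reasoning.",
    "Students solve a linear equation.",
]
print(mine_prerequisites(curriculum))
```

A real system would replace the gazetteer with a trained sequence tagger and the ordering heuristic with association mining over many documents, but the data flow (text in, concept nodes and prerequisite edges out) is the same.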
Transformative approaches in distance online education: aligning evidence to influence the design of teaching at scale
P. McAndrew, S. Anastopoulou, E. Scanlon
In Proceedings of the Fifth Annual ACM Conference on Learning at Scale (L@S '18). DOI: 10.1145/3231644.3232261
In this paper we consider the role of sharing evidence online in ongoing work to develop a new teaching framework for distance and part-time students at The Open University. The work reported here examines the motivation for applying evidence and how evidence can support the development of the framework, rather than the framework itself. The approach described is adapted from previous research projects and focuses on how evidence from internal and external scholarship is gathered and refined through an Evidence Hub that is shared online and open to all in the University. One aspect of the framework (offering greater continuity of study) is selected to show how the methodology applies in practice. In conclusion, we highlight the value of adopting evidence-based approaches to support change processes and show how sharing collective knowledge can influence decision-making.

Multimedia learning principles at scale predict quiz performance
Anita B. Delahay, M. Lovett
In Proceedings of the Fifth Annual ACM Conference on Learning at Scale (L@S '18). DOI: 10.1145/3231644.3231694
Empirically supported multimedia learning (MML) principles [1] suggest effective ways to design instruction, generally for elements on the order of a graphic or an activity. We examined whether the positive impact of MML could be detected in larger instructional units from a MOOC. We coded instructional design (ID) features corresponding to MML principles, mapped quiz items to these features and their use by MOOC participants, and attempted to predict quiz performance. We found that instructional features related to MML, namely practice problems with high-quality examples and concisely written text, were positively predictive. We argue that it is possible to predict quiz item performance from features of the instructional materials, and we suggest ways to extend this method to additional aspects of the ID.

Who downloads online content and why?
Katherine A. Brady, G. Narasimham, D. Fisher
In Proceedings of the Fifth Annual ACM Conference on Learning at Scale (L@S '18). DOI: 10.1145/3231644.3231699
Online learners sometimes prefer to download course content rather than view it on a course website. These students often miss out on interactive content. Knowing who downloads course materials, and why, can help course creators design courses that fit the needs of their students. In this paper we explore downloading behavior by looking at lecture videos in three online classes. We found that the number of days since a video was posted had the strongest relationship with downloading, and that non-technical considerations, such as typical classroom size in a student's home country, matter more than technical issues, such as internet speed. Our findings suggest that more materials will be downloaded when a course is available for a limited time, when students are less familiar with the language of instruction, when students are used to classrooms with a high student-teacher ratio, or when a student's internet speed is slow. Possible reasons for these relationships are discussed.

Representing and predicting student navigational pathways in online college courses
Renzhe Yu, Daokun Jiang, M. Warschauer
In Proceedings of the Fifth Annual ACM Conference on Learning at Scale (L@S '18). DOI: 10.1145/3231644.3231702
Representation and prediction of student navigational pathways, typically based on neural network (NN) methods, have shown potential for improving instruction and learning where human knowledge about learner behavior is insufficient. However, these methods have mostly been studied in MOOCs and are less explored in more institutionalized higher education settings. This work extends such research to the context of online college courses. Treating student navigational sequences through course pages as analogous to documents in natural language processing, we apply a skip-gram model to learn vector embeddings of course pages and visualize the learned vectors to understand the extent to which students' learning pathways align with the pre-designed course structure. We find that students who earn different letter grades exhibit different levels of adherence to the designed sequence. Next, we feed the embedded sequences into a long short-term memory (LSTM) architecture and test its ability to predict the next page a student visits given their prior sequence. The highest accuracy reaches 50.8%, largely outperforming the frequency-based baseline of 41.3%. These results show that neural network methods have the potential to help instructors understand students' learning behaviors and to facilitate automated instructional support.

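The 41.3% frequency-based baseline mentioned above can be made concrete. The sketch below, on invented page-visit sequences, always predicts the most frequent successor of the current page; it does not reproduce the skip-gram embeddings or LSTM the paper actually trains:

```python
from collections import Counter, defaultdict

# Frequency-based next-page baseline: for each page, remember which page
# most often follows it in the training sequences, and always predict that.
def train_bigram_baseline(sequences):
    following = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            following[cur][nxt] += 1
    return {page: counts.most_common(1)[0][0]
            for page, counts in following.items()}

def accuracy(model, sequences):
    """Fraction of transitions where the predicted next page is correct."""
    hits = total = 0
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            total += 1
            hits += model.get(cur) == nxt
    return hits / total

# Toy clickstream data: each list is one student's pathway of page IDs.
train = [["syllabus", "week1", "quiz1"],
         ["syllabus", "week1", "week2"],
         ["syllabus", "week1", "quiz1"]]
model = train_bigram_baseline(train)
print(model["week1"])        # most frequent successor of week1
print(accuracy(model, train))
```

A sequence model such as an LSTM can beat this baseline because it conditions on the whole prior pathway rather than on the current page alone.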
The relationship between scientific explanations and the proficiencies of content, inquiry, and writing
Haiying Li, J. Gobert, Rachel Dickler
In Proceedings of the Fifth Annual ACM Conference on Learning at Scale (L@S '18). DOI: 10.1145/3231644.3231660
Examining the interaction between content knowledge, inquiry proficiency, and writing proficiency is central to understanding the relative contribution of each to students' written communication about their science inquiry. Previous studies, however, have analyzed only one of these proficiencies (i.e., content knowledge, inquiry proficiency, or writing proficiency) at a time. This study investigated the extent to which these proficiencies predicted students' written claims, the evidence for their claims, and the reasoning linking their claims to the evidence. Results showed that all three proficiencies significantly predicted students' claims, but only writing proficiency significantly predicted performance on evidence and reasoning statements. These findings highlight the challenges students face when constructing claim, evidence, and reasoning statements, and can inform scaffolding to address these challenges.

Information overload and online collaborative learning: insights from agent-based modeling
Shimin Zhang
In Proceedings of the Fifth Annual ACM Conference on Learning at Scale (L@S '18). DOI: 10.1145/3231644.3231701
This paper investigates information overload (IO) in large online courses by developing an agent-based model (ABM) of student interaction in a computer-supported collaborative learning (CSCL) environment. Student surveys provided the model parameters, and experimental results suggest that unique visitor count is a better metric than user activity level for IO detection. Modeling synchronous and asynchronous platforms demonstrates how additional channels can be introduced to effectively combat IO. As this is work in progress, we look forward to validating the model's recommendations against activity data from online classrooms.

The effects of adaptive learning in a massive open online course on learners' skill development
Y. Rosen, I. Rushkin, Rob Rubin, Liberty Munson, Andrew M. Ang, G. Weber, Glenn Lopez, D. Tingley
In Proceedings of the Fifth Annual ACM Conference on Learning at Scale (L@S '18). DOI: 10.1145/3231644.3231651
We report an experimental implementation of adaptive learning functionality in a self-paced Microsoft MOOC (massive open online course) on edX. In a personalized adaptive system, the learner's progress toward clearly defined goals is continually assessed, assessment occurs when a student is ready to demonstrate competency, and supporting materials are tailored to the needs of each learner. Despite the promise of adaptive personalized learning, there is a lack of evidence-based instructional design, of transparency in many of the models and algorithms used to provide adaptive technology, and of a framework for rapid experimentation with different models. ALOSI (Adaptive Learning Open Source Initiative) provides open-source adaptive learning technology and a common framework to measure learning gains and learner behavior. This study explored the effects of two different strategies for adaptive learning and assessment. Learners were randomly assigned to three groups: in the first adaptive group, ALOSI prioritized a strategy of remediation, serving learners items on the topics with the least evidence of mastery; in the second adaptive group, ALOSI prioritized a strategy of continuity, in which learners were more likely to be served items on similar topics in sequence until mastery was demonstrated. The control group followed the pathways of the course as set out by the instructional designer, with no adaptive algorithms. We found that the implemented adaptivity in assessment, with an emphasis on remediation, is associated with a substantial increase in learning gains, while producing no large effect on dropout. Further research is needed to confirm these findings and to explore additional effects and implications for course design.

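The two item-selection strategies contrasted above can be illustrated in a few lines. This is a toy sketch, not the ALOSI implementation (which is open source); the topic names and mastery estimates are invented, and a fixed 0.95 mastery threshold is assumed for the continuity strategy:

```python
# Toy illustration of the two adaptive item-selection strategies:
# remediation serves the topic with the least evidence of mastery,
# while continuity stays on the current topic until mastery is shown.
def pick_remediation(mastery):
    """Choose the topic with the lowest current mastery estimate."""
    return min(mastery, key=mastery.get)

def pick_continuity(current_topic, mastery, threshold=0.95):
    """Stay on the current topic until it crosses the mastery threshold,
    then fall back to remediation."""
    if mastery[current_topic] < threshold:
        return current_topic
    return pick_remediation(mastery)

mastery = {"loops": 0.9, "recursion": 0.4, "arrays": 0.7}
print(pick_remediation(mastery))        # lowest-mastery topic
print(pick_continuity("loops", mastery))
```

In a real system the mastery estimates would come from a continually updated learner model (e.g., item responses feeding a mastery probability per topic) rather than from fixed values.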
Supporting answerers with feedback in social Q&A
John Frens, Erin Walker, Gary Hsieh
In Proceedings of the Fifth Annual ACM Conference on Learning at Scale (L@S '18). DOI: 10.1145/3231644.3231653
Prior research has examined the use of social question-and-answer (Q&A) websites for answer and help seeking. However, the potential for these websites to support domain learning has not yet been realized. Helping users write effective answers can benefit subject-area learning for both answerers and the recipients of answers. In this study, we examine the utility of crowdsourced, criteria-based feedback for answerers on a student-centered Q&A website, Brainly.com. In an experiment with 55 users, we compared perceptions of the current rating system against two feedback designs with explicit criteria (Appropriate, Understandable, and Generalizable). Contrary to our hypotheses, answerers disagreed with and rejected the criteria-based feedback. Although the criteria aligned with answerers' goals, and crowdsourced ratings were found to be objectively accurate, the norms and expectations for answers on Brainly conflicted with our design. We conclude with implications for the design of feedback in social Q&A.