Chen Sun, Fan Xia, Ye Wang, Yan Liu, Weining Qian, Aoying Zhou
This paper proposes a deep learning model for automatic evaluation of academic engagement based on video data analysis. A coding system based on the BROMP standard for behavioral, emotional, and cognitive states was defined to code typical videos from an autonomous learning environment. After key points of the human skeleton were extracted from these videos using pose estimation, deep learning methods were used to recognize and judge motions and emotions. On this basis, learners' learning states were analyzed and evaluated, and a prototype academic engagement evaluation system was established.
{"title":"A deep learning model for automatic evaluation of academic engagement","authors":"Chen Sun, Fan Xia, Ye Wang, Yan Liu, Weining Qian, Aoying Zhou","doi":"10.1145/3231644.3231689","DOIUrl":"https://doi.org/10.1145/3231644.3231689","url":null,"abstract":"This paper proposed a deep learning model for automatic evaluation of academic engagement based on video data analysis. A coding system based on the BROMP standard for behavioral, emotional, and cognitive states was defined to code typical videos in an autonomous learning environment. Then after the key points of human skeletons were extracted from these videos using pose estimation technology, deep learning methods were used to realize the effective recognition and judgment of motion and emotions. Based on this, an analysis and evaluation of learners' learning states was accomplished, and a prototype of academic engagement evaluation system was successfully established eventually.","PeriodicalId":20634,"journal":{"name":"Proceedings of the Fifth Annual ACM Conference on Learning at Scale","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89440768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vivek Singh, B. Padmanabhan, Triparna de Vreede, G. Vreede, Stephanie A. Andel, Paul E. Spector, S. Benfield, Ahmad Aslami
Engagement on online learning platforms is essential for user retention, learning, and performance. However, there is a paucity of research on measuring latent engagement from user activities. In this work-in-progress paper, we present a novel engagement score consisting of three sub-dimensions - cognitive engagement, emotional engagement, and behavioral engagement - computed from a comprehensive set of user activities. We plan to evaluate our score on a large-scale online learning platform and compare it with a user survey-based engagement scale from the literature.
{"title":"A content engagement score for online learning platforms","authors":"Vivek Singh, B. Padmanabhan, Triparna de Vreede, G. Vreede, Stephanie A. Andel, Paul E. Spector, S. Benfield, Ahmad Aslami","doi":"10.1145/3231644.3231683","DOIUrl":"https://doi.org/10.1145/3231644.3231683","url":null,"abstract":"Engagement on online learning platforms is essential for user retention, learning, and performance. However, there is a paucity of research addressing latent engagement measurement using user activities. In this work in progress paper, we present a novel engagement score consisting of three sub-dimensions - cognitive engagement, emotional engagement, and behavioral engagement using a comprehensive set of user activities. We plan to evaluate our score on a large scale online learning platform and compare our score with measurements from a user survey-based engagement scale from the literature.","PeriodicalId":20634,"journal":{"name":"Proceedings of the Fifth Annual ACM Conference on Learning at Scale","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89739568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
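The abstract names the three sub-dimensions but not how they are combined. A minimal sketch of one plausible scoring scheme, with entirely hypothetical activity proxies and weights (none of these names come from the paper), might look like:

```python
from dataclasses import dataclass

@dataclass
class ActivityCounts:
    """Hypothetical per-user activity tallies taken from platform logs."""
    notes_taken: int        # proxy for cognitive engagement
    reactions_posted: int   # proxy for emotional engagement
    lessons_completed: int  # proxy for behavioral engagement

def engagement_score(a: ActivityCounts,
                     caps=(20, 20, 20),
                     weights=(1 / 3, 1 / 3, 1 / 3)) -> float:
    """Combine three sub-dimension scores into one value in [0, 1].

    Each raw count is capped and normalized to [0, 1], then the
    sub-scores are averaged with the given weights.
    """
    raw = (a.notes_taken, a.reactions_posted, a.lessons_completed)
    subs = [min(r, c) / c for r, c in zip(raw, caps)]
    return sum(w * s for w, s in zip(weights, subs))
```

A capped, weighted average is only one design choice; the actual score in the paper could use different proxies, normalization, or aggregation.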
J. Bassen, I. Howley, Ethan Fast, John C. Mitchell, Candace Thille
Learning analytics systems have the potential to bring enormous value to online education. Unfortunately, many instructors and platforms do not adequately leverage learning analytics in their courses today. In this paper, we report on the value of these systems from the perspective of course instructors. We study these ideas through OARS, a modular and real-time learning analytics system that we deployed across more than ten online courses with tens of thousands of learners. We leverage this system as a starting point for semi-structured interviews with a diverse set of instructors. Our study suggests new design goals for learning analytics systems, the importance of real-time analytics to many instructors, and the value of flexibility in data selection and aggregation for an instructor when working with an analytics system.
{"title":"OARS","authors":"J. Bassen, I. Howley, Ethan Fast, John C. Mitchell, Candace Thille","doi":"10.1145/3231644.3231669","DOIUrl":"https://doi.org/10.1145/3231644.3231669","url":null,"abstract":"Learning analytics systems have the potential to bring enormous value to online education. Unfortunately, many instructors and platforms do not adequately leverage learning analytics in their courses today. In this paper, we report on the value of these systems from the perspective of course instructors. We study these ideas through OARS, a modular and real-time learning analytics system that we deployed across more than ten online courses with tens of thousands of learners. We leverage this system as a starting point for semi-structured interviews with a diverse set of instructors. Our study suggests new design goals for learning analytics systems, the importance of real-time analytics to many instructors, and the value of flexibility in data selection and aggregation for an instructor when working with an analytics system.","PeriodicalId":20634,"journal":{"name":"Proceedings of the Fifth Annual ACM Conference on Learning at Scale","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84698494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We survey all four years of papers published so far at the Learning at Scale conference in order to reflect on the major research areas that have been investigated and to chart possible directions for future study. We classified all 69 full papers so far into three categories: Systems for Learning at Scale, Interactions with Sociotechnical Systems, and Understanding Online Students. Systems papers presented technologies that varied by how much they amplify human effort (e.g., one-to-one, one-to-many, many-to-many). Interaction papers studied both individual and group interactions with learning technologies. Finally, student-centric study papers focused on modeling knowledge and on promoting global access and equity. We conclude by charting future research directions related to topics such as going beyond the MOOC hype cycle, axes of scale for systems, more immersive course experiences, learning on mobile devices, diversity in student personas, students as co-creators, and fostering better social connections amongst students.
{"title":"Students, systems, and interactions: synthesizing the first four years of learning@scale and charting the future","authors":"Sean Kross, Philip J. Guo","doi":"10.1145/3231644.3231662","DOIUrl":"https://doi.org/10.1145/3231644.3231662","url":null,"abstract":"We survey all four years of papers published so far at the Learning at Scale conference in order to reflect on the major research areas that have been investigated and to chart possible directions for future study. We classified all 69 full papers so far into three categories: Systems for Learning at Scale, Interactions with Sociotechnical Systems, and Understanding Online Students. Systems papers presented technologies that varied by how much they amplify human effort (e.g., one-to-one, one-to-many, many-to-many). Interaction papers studied both individual and group interactions with learning technologies. Finally, student-centric study papers focused on modeling knowledge and on promoting global access and equity. We conclude by charting future research directions related to topics such as going beyond the MOOC hype cycle, axes of scale for systems, more immersive course experiences, learning on mobile devices, diversity in student personas, students as co-creators, and fostering better social connections amongst students.","PeriodicalId":20634,"journal":{"name":"Proceedings of the Fifth Annual ACM Conference on Learning at Scale","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89105949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present our implementation of a software system that helps teachers create preview and review materials before and after class, and that enhances interaction between teachers and students during in-class activities. The system has been widely used in China's colleges and universities since 2016, covering more than 3 million teacher and student users. We plan to demonstrate the tool by presenting how it works in a teaching scenario and offering visitors the opportunity to interact with each other.
{"title":"Rain classroom: a tool for blended learning with MOOCs","authors":"Shuaiguo Wang, Youjie Chen","doi":"10.1145/3231644.3231685","DOIUrl":"https://doi.org/10.1145/3231644.3231685","url":null,"abstract":"We present our implementation of a software system that facilitates teachers to create preview and review teaching materials before and after class, as well as enhance interactions between teachers and students for in-class activities. The software system is widely used in China's colleges and universities from 2016, covering more the 3 million teacher/student users. We plan to demonstrate the tool by presenting how it works in a teaching scenario and offering visitors the opportunity to interact with each other.","PeriodicalId":20634,"journal":{"name":"Proceedings of the Fifth Annual ACM Conference on Learning at Scale","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78234389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The global reach of online experiments and their wide adoption in fields ranging from political science to computer science pose an underexplored opportunity for learning at scale: the possibility of participants learning about the research to which they contribute data. We conducted three experiments on Amazon's Mechanical Turk to evaluate whether participants in paid online experiments are interested in learning about research, what information they find most interesting, and whether providing them with such information actually leads to learning gains. Our findings show that 40% of our participants on Mechanical Turk actively sought out post-experiment learning opportunities despite having already received their financial compensation. Participants expressed high interest in a range of research topics, including previous research and experimental design. Finally, we find that participants comprehend and accurately recall facts from post-experiment learning opportunities. Our findings suggest that Mechanical Turk can be a valuable platform for learning at scale and scientific outreach.
{"title":"The potential for scientific outreach and learning in mechanical turk experiments","authors":"Eunice Jun, Morelle S. Arian, Katharina Reinecke","doi":"10.1145/3231644.3231666","DOIUrl":"https://doi.org/10.1145/3231644.3231666","url":null,"abstract":"The global reach of online experiments and their wide adoption in fields ranging from political science to computer science poses an underexplored opportunity for learning at scale: the possibility of participants learning about the research to which they contribute data. We conducted three experiments on Amazon's Mechanical Turk to evaluate whether participants of paid online experiments are interested in learning about research, what information they find most interesting, and whether providing them with such information actually leads to learning gains. Our findings show that 40% of our participants on Mechanical Turk actively sought out post-experiment learning opportunities despite having already received their financial compensation. Participants expressed high interest in a range of research topics, including previous research and experimental design. Finally, we find that participants comprehend and accurately recall facts from post-experiment learning opportunities. Our findings suggest that Mechanical Turk can be a valuable platform for learning at scale and scientific outreach.","PeriodicalId":20634,"journal":{"name":"Proceedings of the Fifth Annual ACM Conference on Learning at Scale","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75071699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Love them or hate them, videos are a pervasive format for delivering online education at scale. They are especially popular for computer programming tutorials since videos convey expert narration alongside the dynamic effects of editing and running code. However, these screencast videos simply consist of raw pixels, so there is no way to interact with the code embedded inside of them. To expand the design space of learner interactions with programming videos, we developed Codemotion, a computer vision algorithm that automatically extracts source code and dynamic edits from existing videos. Codemotion segments a video into regions that likely contain code, performs OCR on those segments, recognizes source code, and merges together related code edits into contiguous intervals. We used Codemotion to build a novel video player and then elicited interaction design ideas from potential users by running an elicitation study with 10 students followed by four participatory design workshops with 12 additional students. Participants collectively generated ideas for 28 kinds of interactions such as inline code editing, code-based skimming, pop-up video search, and in-video coding exercises.
{"title":"Codemotion","authors":"Kandarp Khandwala, Philip J. Guo","doi":"10.1145/3231644.3231652","DOIUrl":"https://doi.org/10.1145/3231644.3231652","url":null,"abstract":"Love them or hate them, videos are a pervasive format for delivering online education at scale. They are especially popular for computer programming tutorials since videos convey expert narration alongside the dynamic effects of editing and running code. However, these screencast videos simply consist of raw pixels, so there is no way to interact with the code embedded inside of them. To expand the design space of learner interactions with programming videos, we developed Codemotion, a computer vision algorithm that automatically extracts source code and dynamic edits from existing videos. Codemotion segments a video into regions that likely contain code, performs OCR on those segments, recognizes source code, and merges together related code edits into contiguous intervals. We used Codemotion to build a novel video player and then elicited interaction design ideas from potential users by running an elicitation study with 10 students followed by four participatory design workshops with 12 additional students. Participants collectively generated ideas for 28 kinds of interactions such as inline code editing, code-based skimming, pop-up video search, and in-video coding exercises.","PeriodicalId":20634,"journal":{"name":"Proceedings of the Fifth Annual ACM Conference on Learning at Scale","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75624799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jungkook Park, Yeong Hoon Park, Jinhan Kim, Jeongmin Cha, Suin Kim, Alice H. Oh
In programming education, instructors often supplement lectures with active learning experiences by offering programming lab sessions where learners practice writing code themselves. However, widely used instructional programming screencasts are not equipped with an assessment format that encourages such hands-on programming activities. We introduce Elicast, a screencast tool for recording and viewing programming lectures with embedded programming exercises, to provide hands-on programming experiences within the screencast. In Elicast, instructors embed multiple programming exercises while creating a screencast, and learners engage with the exercises by writing code within the screencast, receiving auto-graded results immediately. We conducted an exploratory study of Elicast with five experienced instructors and 63 undergraduate students. We found that instructors structured the lectures into small learning units using embedded exercises as checkpoints. Also, learners engaged more actively with the screencast lectures, checked their understanding of the content through the embedded exercises, and more frequently modified and executed the code during the lectures.
{"title":"Elicast","authors":"Jungkook Park, Yeong Hoon Park, Jinhan Kim, Jeongmin Cha, Suin Kim, Alice H. Oh","doi":"10.1145/3231644.3231657","DOIUrl":"https://doi.org/10.1145/3231644.3231657","url":null,"abstract":"In programming education, instructors often supplement lectures with active learning experiences by offering programming lab sessions where learners themselves practice writing code. However, widely accessed instructional programming screencasts are not equipped with assessment format that encourages such hands-on programming activities. We introduce Elicast, a screencast tool for recording and viewing programming lectures with embedded programming exercises, to provide hands-on programming experiences in the screen-cast. In Elicast, instructors embed multiple programming exercises while creating a screencast, and learners engage in the exercises by writing code within the screencast, receiving auto-graded results immediately. We conducted an exploratory study of Elicast with five experienced instructors and 63 undergraduate students. We found that instructors structured the lectures into small learning units using embedded exercises as checkpoints. Also, learners more actively engaged in the screencast lectures, checked their understanding of the content through the embedded exercises, and more frequently modified and executed the code during the lectures.","PeriodicalId":20634,"journal":{"name":"Proceedings of the Fifth Annual ACM Conference on Learning at Scale","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80851017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There is existing research on multi-MOOC dropout prediction that draws on data from many MOOCs. It has produced good results, but there are two potential problems. On one hand, it is inappropriate to select training data by which week students are in, because courses have different durations. On the other hand, using all other available data can be computationally expensive and impractical. To address these problems, we propose a model called WPSS (WPercent and Subset Selection), which combines the course-progress normalization parameter wpercent with subset selection. Ten MOOCs offered by The University of Hong Kong are involved, and experiments are conducted at the multi-MOOC level. The best performance of WPSS is obtained with a neural network when 50% of the training data is selected (average AUC of 0.9334), compared with an average AUC of 0.8833 for the traditional model without wpercent and subset selection on the same dataset.
{"title":"WPSS","authors":"Yuqian Chai, Chi-Un Lei, Xiao Hu, Yu-Kwong Kwok","doi":"10.1145/3231644.3231687","DOIUrl":"https://doi.org/10.1145/3231644.3231687","url":null,"abstract":"There are existing multi-MOOC level dropout prediction research in which many MOOCs' data are involved. This generated good results, but there are two potential problems. On one hand, it is inappropriate to use which week students are in to select training data because courses are with different durations. On the other hand, using all other existing data can be computationally expensive and inapplicable in practice. To solve these problems, we propose a model called WPSS (WPercent and Subset Selection) which combines the course progress normalization parameter wpercent and subset selection. 10 MOOCs offered by The University of Hong Kong are involved and experiments are in the multi-MOOC level. The best performance of WPSS is obtained in neural network when 50% of training data is selected (average AUC of 0.9334). Average AUC is 0.8833 for traditional model without wpercent and subset selection in the same dataset.","PeriodicalId":20634,"journal":{"name":"Proceedings of the Fifth Annual ACM Conference on Learning at Scale","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89907333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
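The abstract does not define wpercent formally, but the stated idea is to replace absolute week numbers with the fraction of the course elapsed, so that training rows from courses of different durations become comparable. A minimal sketch under that reading, with hypothetical field names, might look like:

```python
def wpercent(event_week: int, course_weeks: int) -> float:
    """Normalize course progress to a fraction of total duration, so
    week 3 of a 6-week MOOC aligns with week 5 of a 10-week MOOC."""
    return event_week / course_weeks

def select_training_rows(rows, cutoff: float):
    """Keep only activity rows up to a given progress fraction,
    regardless of each course's absolute duration.

    `rows` are hypothetical dicts with "week" and "duration" keys;
    the paper's actual subset-selection criterion may differ.
    """
    return [r for r in rows if wpercent(r["week"], r["duration"]) <= cutoff]
```

Selecting, say, the first 50% of each course's progress in this way would correspond to the "50% of training data" setting reported above, though the paper's exact selection procedure may be more involved.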
UPDATED---10 June 2018. This paper describes the demonstration of The Phoenix Corps, the first graphic novel designed specifically for online learning research. While online learning environments regularly use textbooks and videos, graphic novels have not been as popular for research and instruction. This is mainly due to extremely cumbersome and complicated methods of editing traditionally-made graphic novels to update the instructional content or create alternative versions for A/B testing. In this demonstration, attendees will be able to read through, edit, and analyze data from a live online version of The Phoenix Corps.
{"title":"The phoenix corps: a graphic novel for scalable online learning research","authors":"P. Johanes","doi":"10.1145/3231644.3231707","DOIUrl":"https://doi.org/10.1145/3231644.3231707","url":null,"abstract":"UPDATED---10 June 2018. This paper describes the demonstration of The Phoenix Corps, the first graphic novel designed specifically for online learning research. While online learning environments regularly use textbooks and videos, graphic novels have not been as popular for research and instruction. This is mainly due to extremely cumbersome and complicated methods of editing traditionally-made graphic novels to update the instructional content or create alternative versions for A/B testing. In this demonstration, attendees will be able to read through, edit, and analyze data from a live online version of The Phoenix Corps.","PeriodicalId":20634,"journal":{"name":"Proceedings of the Fifth Annual ACM Conference on Learning at Scale","volume":"138 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79309588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}