"Learning to Cheat: Quantifying Changes in Score Advantage of Unproctored Assessments Over Time"
Binglin Chen, Sushmita Azad, Max Fowler, Matthew West, C. Zilles
DOI: https://doi.org/10.1145/3386527.3405925
Proctoring educational assessments (e.g., quizzes and exams) has a cost, be it in faculty (and/or course staff) time or in money to pay for proctoring services. Previous estimates of the utility of proctoring (generally obtained by estimating the score advantage of taking an exam without proctoring) vary widely; most have used across-subjects experimental designs, sometimes with low statistical power. We investigated the score advantage of unproctored exams over proctored exams using a within-subjects design for N = 510 students in an on-campus introductory programming course with 5 proctored exams and 4 unproctored exams. We found that students scored 3.32 percentage points higher on questions on unproctored exams than on proctored exams (p < 0.001). More interestingly, we discovered that this score advantage grew steadily as the semester progressed, from around 0 percentage points at the start of the semester to around 7 percentage points by the end. As the most obvious explanation for this advantage is cheating, we refer to this behavior as the student population "learning to cheat". The data suggest both that more individuals are cheating and that the average benefit of cheating is increasing over the course of the semester. Furthermore, we observed that studying for unproctored exams decreased over the course of the semester while studying for proctored exams stayed constant. Lastly, we estimated the score advantage by question type and found that our long-form programming questions had the highest score advantage on unproctored exams, though there are multiple possible explanations for this finding.
{"title":"Learning to Cheat: Quantifying Changes in Score Advantage of Unproctored Assessments Over Time","authors":"Binglin Chen, Sushmita Azad, Max Fowler, Matthew West, C. Zilles","doi":"10.1145/3386527.3405925","DOIUrl":"https://doi.org/10.1145/3386527.3405925","url":null,"abstract":"Proctoring educational assessments (e.g., quizzes and exams) has a cost, be it in faculty (and/or course staff) time or in money to pay for proctoring services. Previous estimates of the utility of proctoring (generally by estimating the score advantage of taking an exam without proctoring) vary widely and have mostly been implemented using an across subjects experimental designs and sometimes with low statistical power. We investigated the score advantage of unproctored exams versus proctored exams using a within-subjects design for N = 510 students in an on-campus introductory programming course with 5 proctored exams and 4 unproctored exams. We found that students scored 3.32 percentage points higher on questions on unproctored exams than on proctored exams (p < 0.001). More interestingly, however, we discovered that this score advantage on unproctored exams grew steadily as the semester progressed, from around 0 percentage points at the start of semester to around 7 percentage points by the end. As the most obvious explanation for this advantage is cheating, we refer to this behavior as the student population \"learning to cheat\". The data suggests that both more individuals are cheating and the average benefit of cheating is increasing over the course of the semester. Furthermore, we observed that studying for unproctored exams decreased over the course of the semester while studying for proctored exams stayed constant. Lastly, we estimated the score advantage by question type and found that our long-form programming questions had the highest score advantage on unproctored exams, but there are multiple possible explanations for this finding.","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":"55 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76994154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Learning Engineering @ Scale"
Erin Czerwinski, Jim Goodell, R. Sottilare, Ellen Wagner
DOI: https://doi.org/10.1145/3386527.3405934
Scaled learning requires a novel set of practices on the part of professionals developing and delivering systems of scaled learning. IEEE's Industry Connections Industry Consortium for Learning Engineering (ICICLE) defines learning engineering as "a process and practice that applies the learning sciences, using human-centered engineering design methodologies, and data-informed decision-making to support learners and their development." This event will bring together learning engineering experts and other interested conference participants to further define the discipline and strategies for establishing learning engineering at scale. It will also serve as a gathering place for attendees with shared interests in learning engineering to build community around the advancement of learning engineering as a professional practice and academic field of study. Interdisciplinary research across the learning, computer, and data sciences continues to discover techniques for developing increasingly effective technology-mediated learning solutions. However, these applied-science discoveries have been slow to translate into wide-scale practice. This workshop will bring together conference participants to give input into models for scaling the profession of learning engineering and for wide-scale use of learning engineering process and practice models.
{"title":"Learning Engineering @ Scale","authors":"Erin Czerwinski, Jim Goodell, R. Sottilare, Ellen Wagner","doi":"10.1145/3386527.3405934","DOIUrl":"https://doi.org/10.1145/3386527.3405934","url":null,"abstract":"Scaled learning requires a novel set of practices on the part of professionals developing and delivering systems of scaled learning. IEEE's Industry Connections Industry Consortium for Learning Engineering (ICICLE) defines learning engineering as \"a process and practice that applies the learning sciences, using human-centered engineering design methodologies, and data-informed decision-making to support learners and their development.\" This event will bring together learning engineering experts and other interested conference participants to further define the discipline and strategies to establish learning engineering at scale. It will also serve as a gathering place for attendees with shared interests in learning engineering to build community around the advancement of learning engineering as a professional practice and academic field of study. Interdisciplinary research in the learning, computer and data sciences fields continue to discover techniques for developing increasingly effective technology-mediated learning solutions. However, these applied sciences discoveries have been slow to translate into wide-scale practice. This workshop will bring together conference participants to give input into models for scaling the profession of learning engineering and wide-scale use of learning engineering process and practice models.","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":"79 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87095918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Artificial Intelligence for Video-based Learning at Scale"
Kyoungwon Seo, S. Fels, Dongwook Yoon, Ido Roll, Samuel Dodson, Matthew Fong
DOI: https://doi.org/10.1145/3386527.3405937
Video-based learning (VBL) is widespread; however, teaching and learning with video present numerous challenges. For instructors, creating effective instructional videos takes considerable time and effort. For students, watching videos can be a passive learning activity. Artificial intelligence (AI) has the potential to improve the VBL experience for students and teachers. This half-day workshop will bring together multi-disciplinary researchers and practitioners to collaboratively envision the future of VBL enhanced by AI. The workshop will consist of a group discussion followed by a presentation session. Its goal is to facilitate the cross-pollination of design ideas and critical assessments of AI approaches to VBL.
{"title":"Artificial Intelligence for Video-based Learning at Scale","authors":"Kyoungwon Seo, S. Fels, Dongwook Yoon, Ido Roll, Samuel Dodson, Matthew Fong","doi":"10.1145/3386527.3405937","DOIUrl":"https://doi.org/10.1145/3386527.3405937","url":null,"abstract":"Video-based learning (VBL) is widespread; however, there are numerous challenges when teaching and learning with video. For instructors, creating effective instructional videos takes considerable time and effort. For students, watching videos can be a passive learning activity. Artificial intelligence (AI) has the potential to improve the VBL experience for students and teachers. This half-day workshop will bring together multi-disciplinary researchers and practitioners to collaboratively envision the future of VBL enhanced by AI. This workshop will be comprised of a group discussion followed by a presentation session. The goal of the workshop is to facilitate the cross-pollination of design ideas and critical assessments of AI approaches to VBL.","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91242061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Detecting Contract Cheaters in Online Programming Classes with Keystroke Dynamics"
Jeongmin Byun, Jungkook Park, Alice H. Oh
DOI: https://doi.org/10.1145/3386527.3406726
In online programming classes, it is difficult to uphold academic honesty in the assessment process. A common approach, plagiarism detection, is not accurate for novice programmers and is ineffective for detecting contract cheaters. We present and evaluate a new approach: cheating detection based on keystroke dynamics in programming classes.
{"title":"Detecting Contract Cheaters in Online Programming Classes with Keystroke Dynamics","authors":"Jeongmin Byun, Jungkook Park, Alice H. Oh","doi":"10.1145/3386527.3406726","DOIUrl":"https://doi.org/10.1145/3386527.3406726","url":null,"abstract":"In online programming classes, it is tricky to uphold academic honesty in the assessment process. A common approach, plagiarism detection, is not accurate for novice programmers and ineffective for detecting contract cheaters. We present a new approach, cheating detection with keystroke dynamics in programming classes, and evaluated the approach.","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":"409 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82177293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Effectiveness of Crowd-Sourcing On-Demand Assistance from Teachers in Online Learning Platforms"
Thanaporn Patikorn, N. Heffernan
DOI: https://doi.org/10.1145/3386527.3405912
Multiple studies have shown that expert-created on-demand assistance, such as hint messages, improves student learning in online learning environments. However, there is also evidence that certain types of assistance may be detrimental to student learning. In addition, creating and maintaining on-demand assistance is hard and time-consuming. In the 2017-2018 academic year, 132,738 distinct problems were assigned inside ASSISTments, but only 38,194 of those problems had on-demand assistance. To take on-demand assistance to scale, we needed a system that could gather new on-demand assistance and allow us to test and measure its effectiveness. We therefore designed and deployed TeacherASSIST inside ASSISTments. TeacherASSIST allows teachers to create on-demand assistance for any problem as they assign those problems to their students, and then redistributes assistance created by one teacher to students outside that teacher's classroom. We found that teachers inside ASSISTments created 40,292 new instances of assistance for 25,957 different problems over three years; 14 teachers created more than 1,000 instances each. We also conducted two large-scale randomized controlled experiments to investigate how on-demand assistance created by one teacher affected students outside their classes. Students who received on-demand assistance on one problem showed a statistically significant improvement in performance on the next problem. This improvement confirmed our hypothesis that crowd-sourced on-demand assistance is of sufficient quality to improve student learning, allowing us to take on-demand assistance to scale.
{"title":"Effectiveness of Crowd-Sourcing On-Demand Assistance from Teachers in Online Learning Platforms","authors":"Thanaporn Patikorn, N. Heffernan","doi":"10.1145/3386527.3405912","DOIUrl":"https://doi.org/10.1145/3386527.3405912","url":null,"abstract":"It has been shown in multiple studies that expert-created on-demand assistance, such as hint messages, improves student learning in online learning environments. However, there are also evident that certain types of assistance may be detrimental to student learning. In addition, creating and maintaining on-demand assistance are hard and time-consuming. In 2017-2018 academic year, 132,738 distinct problems were assigned inside ASSISTments, but only 38,194 of those problems had on-demand assistance. In order to take on-demand assistance to scale, we needed a system that is able to gather new on-demand assistance and allows us to test and measure its effectiveness. Thus, we designed and deployed TeacherASSIST inside ASSISTments. TeacherASSIST allowed teachers to create on-demand assistance for any problems as they assigned those problems to their students. TeacherASSIST then redistributed on-demand assistance by one teacher to students outside of their classrooms. We found that teachers inside ASSISTments had created 40,292 new instances of assistance for 25,957 different problems in three years. There were 14 teachers who created more than 1,000 instances of on-demand assistance. We also conducted two large-scale randomized controlled experiments to investigate how on-demand assistance created by one teacher affected students outside of their classes. Students who received on-demand assistance for one problem resulted in significant statistical improvement on the next problem performance. The students' improvement in this experiment confirmed our hypothesis that crowd-sourced on-demand assistance was sufficient in quality to improve student learning, allowing us to take on-demand assistance to scale.","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78837913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Understanding Student Experience: A Pathways Model"
C. Edwards, Mark Gaved
DOI: https://doi.org/10.1145/3386527.3406724
As universities increasingly teach at scale, new challenges are introduced and compounded where students are offered greater choice. A key challenge is to maintain an understanding of the student experience amid the huge increase in variation in students' study paths. This understanding is necessary to provide feedback to both faculty and students, and institutionally for the enhancement of quality. This paper is the first description of a fresh approach to this challenge. Whilst based on experience within a large distance learning university, the findings are relevant to all institutions working at scale. Moving from a traditional relational structure to a multi-model database makes it possible to quickly design study path queries to explore the richness of the available data. We provide an overview of this approach, which could be applied by other universities and higher education institutions where data is not being fully utilised.
{"title":"Understanding Student Experience: A Pathways Model","authors":"C. Edwards, Mark Gaved","doi":"10.1145/3386527.3406724","DOIUrl":"https://doi.org/10.1145/3386527.3406724","url":null,"abstract":"As universities increasingly teach at scale, new challenges are introduced and compounded where students are offered greater choice. A key challenge is to maintain an understanding of the student experience within the huge increase in variations in student study path. This understanding is necessary to provide feedback to both faculty and students, and institutionally for the enhancement of quality. This is the first description of one fresh approach to this challenge. Whilst based on the experience within a large distance learning university, the findings are relevant to all institutions working at scale. Moving from a traditional relational structure to a multi-model database makes it possible to quickly design study path queries to explore the richness of available data. We provide an overview of this approach that could be applied by other universities and higher education institutions where data is not being fully utilised.","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78610786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"nQuire"
E. Scanlon, C. Herodotou, Mike Sharples, Kevin McLeod
DOI: https://doi.org/10.1145/3386527.3406722
This paper reviews current developments in the use of nQuire (www.nquire.org.uk), an Open University platform supporting the engagement of members of the public in large-scale interactive surveys and science investigations. The platform is designed to continue a series of mass online science investigations from BBC Lab UK linked to broadcast TV and radio programmes, alongside citizen-led inquiries. The paper reports on progress with the development of the platform and its use in a variety of contexts.
{"title":"nQuire","authors":"E. Scanlon, C. Herodotou, Mike Sharples, Kevin McLeod","doi":"10.1145/3386527.3406722","DOIUrl":"https://doi.org/10.1145/3386527.3406722","url":null,"abstract":"This paper reviews the current developments in the use of nQuire (www.nquire.org.uk), an Open University platform supporting engagement of members of the public in large-scale interactive surveys and science investigations. The platform is designed to continue a series of mass online science investigations from BBC Lab UK linked to broadcast TV and radio programmes, alongside the citizen-led inquiries. This paper reports on progress with the development of the platform and its use in a variety of contexts","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":"44 2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78132987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Designing Inclusive Learning Environments"
Christopher A. Brooks, René F. Kizilcec, Nia Dowell
DOI: https://doi.org/10.1145/3386527.3405935
Large-scale online learning environments present new opportunities to address the need for greater inclusivity in education. Unlike residential environments, which have physical and logistical constraints (e.g., classroom configurations, sizes, and scheduling) that impede our ability to enact more inclusive pedagogy, online learning environments can be personalized and adapted to individual learner needs. As these environments are completely technology-mediated, they offer an almost infinite design space for innovation. Social-scientific research on inclusivity in residential settings provides insight into how we might design online learning environments; however, evidence of efficacious digital implementations of these insights is limited. This workshop aims to advance our understanding of the ways in which adaptivity can be leveraged to buttress inclusivity in STEM learning. Through brief paper presentations and collaborative activities, we intend to outline design opportunities in the scaled learning space for creating more inclusive environments.
{"title":"Designing Inclusive Learning Environments","authors":"Christopher A. Brooks, René F. Kizilcec, Nia Dowell","doi":"10.1145/3386527.3405935","DOIUrl":"https://doi.org/10.1145/3386527.3405935","url":null,"abstract":"Large-scale online learning environments present new opportunities to address the need for greater inclusivity in education. Unlike residential environments, which have physical and logistic constraints (e.g., classroom configurations, sizes, and scheduling) that impede our ability to enact more inclusive pedagogy, online learning environments can be personalized and adapted to individual learner needs. As these environments are completely technology mediated, they offer an almost infinite design space for innovation. Social-scientific research on inclusivity in residential settings provides insight into how we might design for online learning environments, however evidence of efficacious digital implementations of these insights is limited. This workshop aims to advance our understanding of the ways in which adaptivity can be leveraged to buttress inclusivity in STEM learning. Through brief paper presentations and collaborative activities we intend to outline design opportunities in the scaled learning space for creating more inclusive environments.","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":"47 3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89466890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Informal Learning Communities: The Other Massive Open Online 'C'"
Will Hudgins, M. Lynch, Ash Schmal, Harsh Sikka, Michael Swenson, David A. Joyner
DOI: https://doi.org/10.1145/3386527.3405926
While the literature on learning at scale has largely focused on MOOCs, online degree programs, and AI techniques for supporting scalable learning experiences, informal learning communities have been relatively underrepresented. Nonetheless, these massive open online learning communities regularly draw far more engaged users than the typical MOOC. Their informal structure, however, makes them significantly more difficult to study. In this work, we take a first step toward understanding these communities specifically from the perspective of scale. Taking a sample of 62 such communities, we develop a tagging system for understanding their specific features and how those features relate to scale. For example, just as a MOOC cannot manually grade every assignment, an informal learning community cannot approve every contribution; and just as MOOCs therefore employ autograding, informal learning communities employ crowd-sourced moderation or platform-driven enforcement. Using these tags, we then select several communities for deeper case studies. We also use these tags to make sense of learning-based subreddits from the popular community site Reddit, which offers an API for programmatic analysis. Based on these techniques, we offer findings about the performance of informal learning communities at scale and issue a call to include these environments more fully in future research on learning at scale.
{"title":"Informal Learning Communities: The Other Massive Open Online 'C'","authors":"Will Hudgins, M. Lynch, Ash Schmal, Harsh Sikka, Michael Swenson, David A. Joyner","doi":"10.1145/3386527.3405926","DOIUrl":"https://doi.org/10.1145/3386527.3405926","url":null,"abstract":"While the literature on learning at scale has largely focused on MOOCs, online degree programs, and AI techniques for supporting scalable learning experiences, informal learning communities have been relatively underrepresented. None-theless, these massive open online learning communities regularly draw far more engaged users than the typical MOOC. Their informal structure, however, makes them significantly more difficult to study. In this work, we take a first step toward attempting to understand these communi-ties specifically from the perspective of scale. Taking a sample of 62 such communities, we develop a tagging sys-tem for understanding the specific features and how they relate to scale. For example, just as a MOOC cannot man-ually grade every assignment, so also an informal learning community cannot approve every contribution; and just as MOOCs therefore employ autograding, informal learning communities employ crowd-sourced moderation or plat-form-driven enforcement. Using these tags, we then select several communities for deeper case studies. We also use these tags to make sense of learning-based subreddits from the popular community site Reddit, which offers an API for programmatic analysis. Based on these techniques, we offer findings about the performance of informal learning communities at scale and issue a call to include these envi-ronments more fully in future research on learning at scale.","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88796111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Automated Generation of Learning Paths at Scale"
J. Z. Jia, Gulsen Kutluoglu, Chuong B. Do
DOI: https://doi.org/10.1145/3386527.3406754
Content creation has long been regarded as one of the most challenging obstacles to personalized learning. In recent years, however, online platforms have managed to mobilize both audiences and content creators in large numbers, creating new opportunities to revisit the pursuit of personalization at scale. We describe initial results from a real-world implementation of a system for algorithmically generating learning paths at Udemy.com, a two-sided online educational marketplace with over 150,000 courses and over 50 million users. Our initial investigations suggest the potential effectiveness of automated approaches for guiding self-directed learners toward courses that help them achieve their desired learning outcomes.
{"title":"Automated Generation of Learning Paths at Scale","authors":"J. Z. Jia, Gulsen Kutluoglu, Chuong B. Do","doi":"10.1145/3386527.3406754","DOIUrl":"https://doi.org/10.1145/3386527.3406754","url":null,"abstract":"Content creation has long been regarded as one of the most challenging obstacles to personalized learning. In recent years, however, online platforms have managed to mobilize both audiences and content creators in large numbers, creating new opportunities to revisit the pursuit of personalization at scale. We describe initial results from a real-world implementation of a system for algorithmically generating learning paths at Udemy.com, a two-sided online educational marketplace with over 150,000 courses and over 50 million users. Our initial investigations suggest the potential effectiveness of automated approaches for guiding self-directed learners toward courses that help them achieve their desired learning outcomes.","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":"70 6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83428527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}