Learning Analytics Dashboards (LADs) are predicated on the notion that access to more academic information can help students regulate their academic behaviors, but what is the association between information-seeking preferences and help-seeking practices among college students? If given access to more information, what might college students do with it? We investigated these questions in a series of two studies. Study 1 validates a measure of information-seeking preferences---the Motivated Information-Seeking Questionnaire (MISQ)---using a college student sample drawn from across the country (n = 551). In a second study, we used the MISQ to measure college students' (n = 210) performance-avoid (i.e., avoiding seeming incompetent in relation to one's peers) and performance-approach (i.e., wishing to outperform one's peers) information-seeking preferences, their help-seeking behaviors, and their ability to comprehend line graphs and bar graphs---two common graph types for LADs. Results point to a negative relationship between graph comprehension and help-seeking strategies, such as attending office hours, emailing one's professor for help, or visiting a study center---even after controlling for academic performance and demographic characteristics. This suggests that students more capable of reading graphs might not seek help when needed. Further results suggest a positive relationship between performance-approach information-seeking preferences and how often students compare themselves to their peers. This study contributes to our understanding of the motivational implications of data visualizations in academic settings and increases our knowledge of the way students interpret visualizations. It uncovers tensions between what students want to see and what might be more motivationally appropriate for them to see.
Importantly, the MISQ and graph comprehension measure can be used in future studies to better understand the role of students' information-seeking tendencies in their interpretation of the various kinds of feedback present in LADs.
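Scale validation of the kind reported for the MISQ typically includes an internal-consistency check such as Cronbach's alpha; the sketch below is a standard textbook formula implemented with numpy, not the authors' published analysis code.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    A common internal-consistency statistic used when validating a
    survey scale; values near 1 indicate the items measure a common
    construct.
    """
    n_items = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)      # per-item variance
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (n_items / (n_items - 1)) * (1.0 - item_vars.sum() / total_var)
```

With perfectly consistent items (every respondent answers each item identically), alpha is exactly 1.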
{"title":"Motivated Information Seeking and Graph Comprehension Among College Students","authors":"Stephen J. Aguilar, Clare Baek","doi":"10.1145/3303772.3303805","DOIUrl":"https://doi.org/10.1145/3303772.3303805","url":null,"abstract":"Learning Analytics Dashboards (LADs) are predicated on the notion that access to more academic information can help students regulate their academic behaviors, but what is the association between information seeking preferences and help-seeking practices among college students? If given access to more information, what might college students do with it? We investigated these questions in a series of two studies. Study 1 validates a measure of information-seeking preferences---the Motivated Information-Seeking Questionnaire (MISQ)----using a college student sample drawn from across the country (n = 551). In a second study, we used the MISQ to measure college students' (n=210) performance-avoid (i.e., avoiding seeming incompetent in relation to one's peers) and performance-approach (i.e., wishing to outperform one's peers) information seeking preferences, their help-seeking behaviors, and their ability to comprehend line graphs and bar graphs---two common graphs types for LADs. Results point to a negative relationship between graph comprehension and help-seeking strategies, such as attending office hours, emailing one's professor for help, or visiting a study center---even after controlling for academic performance and demographic characteristics. This suggests that students more capable of readings graphs might not seek help when needed. Further results suggest a positive relationship between performance-approach information-seeking preferences, and how often students compare themselves to their peers. This study contributes to our understanding of the motivational implications of academic data visualizations in academic settings, and increases our knowledge of the way students interpret visualizations. 
It uncovers tensions between what students want to see, versus what it might be more motivationally appropriate for them to see. Importantly, the MISQ and graph comprehension measure can be used in future studies to better understand the role of students' information seeking tendencies with regard to their interpretation of various kinds of feedback present in LADs.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125080470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a new approach to translating learner data into self-regulated learning support. Learning phases in blended classrooms place unique requirements on students' self-regulated learning (SRL). Learning path graphs merge moment-by-moment learning curves and learning phase data to understand students' SRL support needs. Results indicate four groups with different SRL support needs. Students in the self-regulated learning group are capable of learning without external regulation. Students in the teacher regulation group need initial teacher regulation but rely on SRL thereafter. Students in the system regulation group require both teacher and system regulation to learn. Finally, the advanced system support group needs support beyond the current level of system regulation. Based on these insights, the application of personalized dashboards and hybrid human-system regulation is further specified.
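The four groups can be pictured as a decision rule over regulation-need indicators; the indicators below are hypothetical stand-ins for the paper's actual learning-path features, shown only to make the group definitions concrete.

```python
def srl_support_group(needs_initial_teacher: bool,
                      needs_ongoing_support: bool,
                      system_support_sufficient: bool = True) -> str:
    """Map hypothetical regulation-need indicators onto the four SRL
    support groups named in the abstract. The real grouping is derived
    from learning path graphs, not from booleans like these."""
    if not needs_initial_teacher and not needs_ongoing_support:
        return "self-regulated learning"
    if needs_initial_teacher and not needs_ongoing_support:
        return "teacher regulation"
    if system_support_sufficient:
        return "system regulation"
    return "advanced system support"
```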
{"title":"Towards Hybrid Human-System Regulation: Understanding Children' SRL Support Needs in Blended Classrooms","authors":"I. Molenaar, A. Horvers, R. Baker","doi":"10.1145/3303772.3303780","DOIUrl":"https://doi.org/10.1145/3303772.3303780","url":null,"abstract":"This paper proposes a new approach to translate learner data into self-regulated learning support. Learning phases in blended classrooms place unique requirements on students' self-regulated learning (SRL). Learning path graphs merge moment-by-moment learning curves and learning phase data to understand student' SRL support needs. Results indicate 4 groups with different SRL support needs. Students in the self-regulated learning group are capable of learning without external regulation. In the teacher regulation group students need initial teacher regulation but rely on SRL thereafter. Students in the system regulation group require teacher and system regulation to learn. Finally, the advanced system support group is in need of support beyond the current level of system regulation. Based on these insights, the application of personalized dashboards and hybrid human-system regulation is further specified.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"774 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123281910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Benjamin A. Motz, Joshua Quick, Noah L. Schroeder, Jordon Zook, Matt Gunkel
Learning management system (LMS) web logs provide granular, near-real-time records of student behavior as learners interact with online course materials in digital learning environments. However, it remains unclear whether LMS activity indeed reflects behavioral properties of student engagement, and it also remains unclear how to deal with variability in LMS usage across a diversity of courses. In this study, we evaluate whether instructors' subjective ratings of their students' engagement are related to features of LMS activity for 9,021 students enrolled in 473 for-credit courses. We find that estimators derived from LMS web logs are closely related to instructor ratings of engagement; however, we also observe that there is not a single generic relationship between activity and engagement, and what constitutes the behavioral components of "engagement" is contingent on course structure. Nevertheless, for many of these courses, modeled engagement scores are comparable to instructors' ratings in their sensitivity for predicting academic performance. As long as they are tuned to the differences between courses, activity indices from LMS web logs can provide a valid and useful proxy measure of student engagement.
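One way to read "tuned to the differences between courses" is fitting a separate activity-to-engagement map per course rather than one pooled model; the numpy sketch below illustrates that idea under that assumption (the abstract does not specify the authors' actual estimator).

```python
import numpy as np

def fit_per_course(features_by_course: dict, ratings_by_course: dict) -> dict:
    """Fit one least-squares map (LMS activity features -> instructor
    engagement rating) per course, so each course gets its own
    activity-engagement relationship."""
    models = {}
    for course_id, X in features_by_course.items():
        y = ratings_by_course[course_id]
        X1 = np.column_stack([np.ones(len(X)), X])   # add intercept column
        coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
        models[course_id] = coef
    return models

def predict_engagement(models: dict, course_id: str, x: np.ndarray) -> float:
    """Score a student's activity vector with that course's own model."""
    coef = models[course_id]
    return float(coef[0] + np.dot(coef[1:], x))
```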
{"title":"The validity and utility of activity logs as a measure of student engagement","authors":"Benjamin A. Motz, Joshua Quick, Noah L. Schroeder, Jordon Zook, Matt Gunkel","doi":"10.1145/3303772.3303789","DOIUrl":"https://doi.org/10.1145/3303772.3303789","url":null,"abstract":"Learning management system (LMS) web logs provide granular, near-real-time records of student behavior as learners interact with online course materials in digital learning environments. However, it remains unclear whether LMS activity indeed reflects behavioral properties of student engagement, and it also remains unclear how to deal with variability in LMS usage across a diversity of courses. In this study, we evaluate whether instructors' subjective ratings of their students' engagement are related to features of LMS activity for 9,021 students enrolled in 473 for-credit courses. We find that estimators derived from LMS web logs are closely related to instructor ratings of engagement, however, we also observe that there is not a single generic relationship between activity and engagement, and what constitutes the behavioral components of \"engagement\" will be contingent on course structure. However, for many of these courses, modeled engagement scores are comparable to instructors' ratings in their sensitivity for predicting academic performance. 
As long as they are tuned to the differences between courses, activity indices from LMS web logs can provide a valid and useful proxy measure of student engagement.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123431545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. Fincham, A. Whitelock-Wainwright, Vitomir Kovanović, Srećko Joksimović, J. V. Staalduinen, D. Gašević
Student engagement is often considered an overarching construct in educational research and practice. Though frequently employed in the learning analytics literature, engagement has been subjected to a variety of interpretations, and there is little consensus regarding the very definition of the construct. This raises grave concerns with regard to construct validity: namely, do these varied metrics measure the same thing? To address such concerns, this paper proposes, quantifies, and validates a model of engagement which is both grounded in the theoretical literature and described by common metrics drawn from the field of learning analytics. To identify a latent variable structure in our data we used exploratory factor analysis and validated the derived model on a separate sub-sample of our data using confirmatory factor analysis. To analyze the associations between our latent variables and student outcomes, a structural equation model was fitted, and the validity of this model across different course settings was assessed using MIMIC modeling. Across different domains, the broad consistency of our model with the theoretical literature suggests a mechanism that may be used to inform both interventions and course design.
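The latent-structure step can be illustrated with a rough numpy-only stand-in: principal-component loadings of the correlation matrix. Proper exploratory factor analysis as used in the paper involves maximum-likelihood estimation and rotation (e.g. in R's psych or lavaan), so this is only a sketch of the idea of extracting loadings.

```python
import numpy as np

def pc_loadings(data: np.ndarray, n_factors: int) -> np.ndarray:
    """Principal-component loadings of the correlation matrix: a crude
    stand-in for EFA loadings. Rows are observed variables, columns are
    extracted components, values are variable-component loadings."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_factors]    # keep the top components
    return eigvecs[:, order] * np.sqrt(eigvals[order])
```

Three perfectly correlated indicator variables load (up to sign) at 1.0 on a single component.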
{"title":"Counting Clicks is Not Enough: Validating a Theorized Model of Engagement in Learning Analytics","authors":"E. Fincham, A. Whitelock-Wainwright, Vitomir Kovanovíc, Srécko Joksimovíc, J. V. Staalduinen, D. Gašević","doi":"10.1145/3303772.3303775","DOIUrl":"https://doi.org/10.1145/3303772.3303775","url":null,"abstract":"Student engagement is often considered an overarching construct in educational research and practice. Though frequently employed in the learning analytics literature, engagement has been subjected to a variety of interpretations and there is little consensus regarding the very definition of the construct. This raises grave concerns with regards to construct validity: namely, do these varied metrics measure the same thing? To address such concerns, this paper proposes, quantifies, and validates a model of engagement which is both grounded in the theoretical literature and described by common metrics drawn from the field of learning analytics. To identify a latent variable structure in our data we used exploratory factor analysis and validated the derived model on a separate sub-sample of our data using confirmatory factor analysis. To analyze the associations between our latent variables and student outcomes, a structural equation model was fitted, and the validity of this model across different course settings was assessed using MIMIC modeling. 
Across different domains, the broad consistency of our model with the theoretical literature suggest a mechanism that may be used to inform both interventions and course design.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"216 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123778061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The widespread use of online discussion forums in educational settings provides a rich source of data for researchers interested in how collaboration and interaction can foster effective learning. Such online behaviour can be understood through the Community of Inquiry framework, and the cognitive presence construct in particular can be used to characterise the depth of a student's critical engagement with course material. Automated methods have been developed to support this task, but many studies used small data sets, and there have been few replication studies. In this work, we present findings related to the robustness and generalisability of automated classification methods for detecting cognitive presence in discussion forum transcripts. We closely examined one published state-of-the-art model, comparing different approaches to managing unbalanced classes in the data. By demonstrating how commonly-used data preprocessing practices can lead to over-optimistic results, we contribute to the development of the field so that the results of automated content analysis can be used with confidence.
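The contamination pitfall the replication examines can be shown with a toy sketch: naive duplication oversampling applied before the train/test split lets copies of the same example land on both sides, so the test set is no longer unseen. This illustrates the general pitfall only; the paper's actual model and data are not reproduced.

```python
import random

def make_rows():
    # 10 minority rows (label 1) and 40 majority rows (label 0),
    # each tagged with a unique id so leakage is detectable.
    return [(i, 1 if i < 10 else 0) for i in range(50)]

def oversample(rows, factor=5):
    # Naive duplication oversampling: repeat each minority-class row.
    extra = [r for r in rows if r[1] == 1] * (factor - 1)
    return rows + extra

def split(rows, test_frac=0.2, seed=0):
    rng = random.Random(seed)
    rows = rows[:]
    rng.shuffle(rows)
    cut = int(len(rows) * (1 - test_frac))
    return rows[:cut], rows[cut:]

def leaked(train, test):
    # Count test rows whose id also appears in the training set.
    train_ids = {rid for rid, _ in train}
    return sum(1 for rid, _ in test if rid in train_ids)

# Safe order: hold out the test set first, oversample only the training part.
train_safe, test_safe = split(make_rows())
train_safe = oversample(train_safe)

# Contaminated order: oversample first, then split.
train_bad, test_bad = split(oversample(make_rows()))
```

The safe order leaks nothing by construction; the contaminated order puts copies of the same row in both partitions, which inflates test scores.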
{"title":"Analysing discussion forum data: a replication study avoiding data contamination","authors":"Elaine Farrow, Johanna D. Moore, D. Gašević","doi":"10.1145/3303772.3303779","DOIUrl":"https://doi.org/10.1145/3303772.3303779","url":null,"abstract":"The widespread use of online discussion forums in educational settings provides a rich source of data for researchers interested in how collaboration and interaction can foster effective learning. Such online behaviour can be understood through the Community of Inquiry framework, and the cognitive presence construct in particular can be used to characterise the depth of a student's critical engagement with course material. Automated methods have been developed to support this task, but many studies used small data sets, and there have been few replication studies. In this work, we present findings related to the robustness and generalisability of automated classification methods for detecting cognitive presence in discussion forum transcripts. We closely examined one published state-of-the-art model, comparing different approaches to managing unbalanced classes in the data. 
By demonstrating how commonly-used data preprocessing practices can lead to over-optimistic results, we contribute to the development of the field so that the results of automated content analysis can be used with confidence.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126764683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MOOCs and online courses have notoriously high attrition [1]. One challenge is that it can be difficult to tell whether students fail to complete because of disinterest or because of course difficulty. Utilizing a Deep Knowledge Tracing framework, we account for student engagement by including course interaction covariates. With these, we find that we can predict a student's next item response with over 88% accuracy. Using these predictions, targeted interventions can be offered to students and targeted improvements can be made to courses. In particular, this approach would allow for gating of content until a student has a reasonable likelihood of succeeding.
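Deep Knowledge Tracing conventionally encodes each step as a one-hot vector over (item, correctness) pairs fed to a recurrent network; adding engagement covariates amounts to appending them to that input. The sketch below shows only this input encoding, under the assumption of a single appended covariate (the paper's actual covariate list and network are not reproduced).

```python
def encode_step(item_id: int, correct: bool, n_items: int,
                engagement: float) -> list:
    """One input step for a DKT-style recurrent model: a one-hot over
    2 * n_items positions (first n_items = incorrect responses, next
    n_items = correct responses) plus one appended engagement covariate.
    The covariate here is illustrative, e.g. a normalized activity rate."""
    vec = [0.0] * (2 * n_items)
    vec[item_id + (n_items if correct else 0)] = 1.0
    return vec + [engagement]
```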
{"title":"Deep Knowledge Tracing and Engagement with MOOCs","authors":"Kritphong Mongkhonvanit, K. Kanopka, David Lang","doi":"10.1145/3303772.3303830","DOIUrl":"https://doi.org/10.1145/3303772.3303830","url":null,"abstract":"MOOCs and online courses have notoriously high attrition [1]. One challenge is that it can be difficult to tell if students fail to complete because of disinterest or because of course difficulty. Utilizing a Deep Knowledge Tracing framework, we account for student engagement by including course interaction covariates. With these, we find that we can predict a student's next item response with over 88% accuracy. Using these predictions, targeted interventions can be offered to students and targeted improvements can be made to courses. In particular, this approach would allow for gating of content until a student has reasonable likelihood of succeeding.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115550983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cathlyn Stone, A. Quirk, Margo Gardener, Stephen Hutt, A. Duckworth, S. D’Mello
It is widely acknowledged that the language we use reflects numerous psychological constructs, including our thoughts, feelings, and desires. Can the so-called "noncognitive" traits with known links to success, such as growth mindset, leadership ability, and intrinsic motivation, be similarly revealed through language? We investigated this question by analyzing students' 150-word open-ended descriptions of their own extracurricular activities or work experiences included in their college applications. We used the Common Application-National Student Clearinghouse data set, a six-year longitudinal dataset that includes college application data and graduation outcomes for 278,201 U.S. high-school students. We first developed a coding scheme from a stratified sample of 4,000 essays and used it to code seven traits: growth mindset, perseverance, goal orientation, leadership, psychological connection (intrinsic motivation), self-transcendent (prosocial) purpose, and team orientation, along with earned accolades. Then, we used standard classifiers with bag-of-n-grams features and deep learning techniques (recurrent neural networks) with word embeddings to automate the coding. The models demonstrated convergent validity with the human coding, with AUCs ranging from .770 to .925 and correlations ranging from .418 to .734. There was also evidence of discriminant validity in the pattern of inter-correlations (rs between -.206 and .306) for both human- and model-coded traits.
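A minimal, dependency-free sketch of the bag-of-n-grams featurization the "standard classifiers" consumed (the RNN-with-word-embeddings half of the pipeline, and any real preprocessing choices such as stemming or stop-word removal, are omitted here):

```python
from collections import Counter

def bag_of_ngrams(text: str, n_values=(1, 2)) -> Counter:
    """Count unigrams and bigrams of a whitespace-tokenized, lowercased
    text. Each essay becomes a sparse feature vector keyed by n-gram,
    suitable as classifier input."""
    tokens = text.lower().split()
    feats = Counter()
    for n in n_values:
        for i in range(len(tokens) - n + 1):
            feats[" ".join(tokens[i:i + n])] += 1
    return feats
```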
{"title":"Language as Thought: Using Natural Language Processing to Model Noncognitive Traits that Predict College Success","authors":"Cathlyn Stone, A. Quirk, Margo Gardener, Stephen Hutt, A. Duckworth, S. D’Mello","doi":"10.1145/3303772.3303801","DOIUrl":"https://doi.org/10.1145/3303772.3303801","url":null,"abstract":"It is widely acknowledged that the language we use reflects numerous psychological constructs, including our thoughts, feelings, and desires. Can the so called \"noncognitive\" traits with known links to success, such as growth mindset, leadership ability, and intrinsic motivation, be similarly revealed through language? We investigated this question by analyzing students' 150-word open-ended descriptions of their own extracurricular activities or work experiences included in their college applications. We used the Common Application-National Student Clearinghouse data set, a six-year longitudinal dataset that includes college application data and graduation outcomes for 278,201 U.S. high-school students. We first developed a coding scheme from a stratified sample of 4,000 essays and used it to code seven traits: growth mindset, perseverance, goal orientation, leadership, psychological connection (intrinsic motivation), self-transcendent (prosocial) purpose, and team orientation, along with earned accolades. Then, we used standard classifiers with bag-of-n-grams as features and deep learning techniques (recurrent neural networks) with word embeddings to automate the coding. The models demonstrated convergent validity with the human coding with AUCs ranging from .770 to .925 and correlations ranging from .418 to .734. There was also evidence of discriminant validity in the pattern of inter-correlations (rs between -.206 to .306) for both human- and model-coded traits. 
Finally, the models demonstrated incremental predictive validity in predicting six-year graduation outcomes net of sociodemographics, intelligence, academic achievement, and institutional graduation rates. We conclude that language provides a lens into noncognitive traits important for college success, which can be captured with automated methods.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"159 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115994571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sahba Akhavan Niaki, Clint P. George, G. Michailidis, C. Beal
We study the usage of a self-guided online tutoring platform called Algebra Nation, which is widely used by middle school and high school students who take the End-of-Course Algebra I exam at the end of the school year. This article examines how the platform contributes to increasing students' exam scores by analyzing users' logs over a three-year period. The platform's user base grew from more than 36,000 students in the first year to nearly 67,000 by the third, enabling us to examine how usage patterns evolved and influenced students' performance at scale. We first identify which Algebra Nation usage factors, in conjunction with overall math preparation and socioeconomic factors, contribute to students' exam performance. Subsequently, we investigate the effect of teachers' increased familiarity with Algebra Nation on students' scores across different grades through mediation analysis. The results show that the indirect effect of teacher familiarity with the platform, operating through increased student usage, is more significant in higher grades.
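The mediation logic reported here (teacher familiarity → student usage → exam score) follows the standard product-of-coefficients approach; a numpy sketch on noise-free synthetic data, with illustrative variable names rather than the study's fitted model:

```python
import numpy as np

def indirect_effect(x, m, y) -> float:
    """Product-of-coefficients mediation: a = slope of the mediator m on
    the treatment x; b = slope of the outcome y on m, controlling for x;
    the indirect effect of x on y through m is a * b."""
    x, m, y = map(np.asarray, (x, m, y))
    ones = np.ones_like(x, dtype=float)
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0][2]
    return float(a * b)
```

On exact synthetic data where m = 2x (plus a disturbance orthogonal to x) and y = 3m + 0.5x, the indirect effect recovers 2 * 3 = 6.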
{"title":"Investigating the Usage Patterns of Algebra Nation Tutoring Platform","authors":"Sahba Akhavan Niaki, Clint P. George, G. Michailidis, C. Beal","doi":"10.1145/3303772.3303788","DOIUrl":"https://doi.org/10.1145/3303772.3303788","url":null,"abstract":"We study the usage of a self-guided online tutoring platform called Algebra Nation, which is widely by middle school and high school students who take the End-of-Course Algebra I exam at the end of the school year. This article aims to study how the platform contributes to increasing students' exam scores by examining users' logs over a three year period. The platform under consideration was used by more than 36,000 students in the first year, to nearly 67,000 by the third year, thus enabling us to examine how usage patterns evolved and influenced students' performance at scale. We first identify which Algebra Nation usage factors in conjunction with math overall preparation and socioeconomic factors contribute to the students' exam performance. Subsequently, we investigate the effect of increased teacher familiarity level with the Algebra Nation on students' scores across different grades through mediation analysis. The results show that the indirect effect of teacher's familiarity with the platform through increasing student's usage dosage is more significant in higher grades.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129615264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Heo, Hyoungjoon Lim, S. Yun, Sungha Ju, Sangyoon Park, R. Lee
Yonsei University in Korea launched an educational innovation project entitled "Data-Driven Smart-Connected Campus Life Service", for which student-related data have been accumulated at the university level since spring 2015, and descriptive, predictive, and prescriptive modeling have been conducted to offer innovative education services to students. The dataset covers not only conventional student information, student questionnaire surveys, and university administrative data, but also unconventional data sets such as student location data and learning management system (LMS) log data. Based on these datasets, we conducted a preliminary implementation of descriptive and predictive modeling of achievement, satisfaction, and mental health for more than 4,000 freshman students at a residential college. The results were overall promising. First, descriptive and predictive modeling of GPA for student achievement presented a list of significant predictive variables drawn from student locations and LMS activities. Second, descriptive modeling of student satisfaction revealed influential variables such as "improvement of creativity" and "ability of cooperation". Third, similar descriptive modeling was applied to students' mental health changes by semester, and the study uncovered factors such as "difficulty with relationship" and "time spent with friends increased" as key determinants of student mental health. Although the educational innovation project is still in its early stages, we have three strategies for future modelling efforts: (1) step-by-step improvement from descriptive and predictive to prescriptive modelling; (2) full use of recurring data acquisition; (3) different levels of segmentation.
{"title":"Descriptive and Predictive Modeling of Student Achievement, Satisfaction, and Mental Health for Data-Driven Smart Connected Campus Life Service","authors":"J. Heo, Hyoungjoon Lim, S. Yun, Sungha Ju, Sangyoon Park, R. Lee","doi":"10.1145/3303772.3303792","DOIUrl":"https://doi.org/10.1145/3303772.3303792","url":null,"abstract":"Yonsei University in Korea launched an educational innovation project entitled \"Data-Driven Smart-Connected Campus Life Service\", for which student-related data have been accumulated at university level since spring of 2015, and descriptive, predictive and prescriptive modeling have been conducted to offer innovative education service to students. The dataset covers not only conventional student information, student questionnaire survey, and university administrative data, but also unconventional data sets such as student location data and learning management system (LMS) log data. Based on the datasets, with respect to 4,000+ freshman students at residential college, we conducted preliminary implementation of descriptive and predictive modeling for student achievement, satisfaction, and mental health. The results were overall promising. First, descriptive and predictive modeling of GPA for student achievement presented a list of significant predictive variables from student locations and LMS activities. Second, descriptive modeling of student satisfaction revealed influential variables such as \"improvement of creativity\" and \"ability of cooperation\". Third, similar descriptive modeling was applied to students' mental health changes by semesters, and the study uncovered influential factors such as \"difficulty with relationship\" and \"time spent with friends increased' as key determinants of student mental health. 
Although the educational innovation project is still in its early stages, we have three strategies of the future modelling efforts: They are: (1) step-by-step improvement from descriptive, predictive, to prescriptive modelling; (2) full use of recurring data acquisition; (3) different level of segmentation.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128500267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Analytics (LA) research has demonstrated the potential of LA for detecting and monitoring cognitive-affective parameters and improving student success. But most of this work has been applied to online and computerized learning environments, whereas physical classrooms have largely remained outside its scope. This paper attempts to bridge that gap by proposing a student feedback model in which students report on the difficult/easy and engaging/boring aspects of their lecture. We outline the pedagogical affordances of an aggregated time-series of such data and discuss it within the context of LA research.
{"title":"DEBE feedback for large lecture classroom analytics","authors":"R. Mitra, Pankaj S. Chavan","doi":"10.1145/3303772.3303821","DOIUrl":"https://doi.org/10.1145/3303772.3303821","url":null,"abstract":"Learning Analytics (LA) research has demonstrated the potential of LA in detecting and monitoring cognitive-affective parameters and improving student success. But most of it has been applied to online and computerized learning environments whereas physical classrooms have largely remained outside the scope of such research. This paper attempts to bridge that gap by proposing a student feedback model in which they report on the difficult/easy and engaging/boring aspects of their lecture. We outline the pedagogical affordances of an aggregated time-series of such data and discuss it within the context of LA research.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128695285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}