Predictive modeling has been a core area of learning analytics research over the past decade, with such models currently deployed in a variety of educational contexts from MOOCs to K-12. However, analyses of the differential effectiveness of these models across demographic, identity, or other groups have been scarce. In this paper, we present a method for evaluating unfairness in predictive student models. We define unfairness in terms of differential accuracy between subgroups, and measure it using a new metric we term the Absolute Between-ROC Area (ABROCA). We demonstrate the proposed method through a gender-based "slicing analysis" using five different models replicated from other works and a dataset of 44 unique MOOCs and over four million learners. Our results demonstrate (1) significant differences in model fairness according to (a) the statistical algorithm and (b) the feature set used; (2) that the gender imbalance ratio, curricular area, and specific course used for a model all display significant associations with the value of the ABROCA statistic; and (3) that there is no evidence of a strict tradeoff between performance and fairness. This work provides a framework for quantifying and understanding how predictive models might inadvertently privilege, or disparately impact, different student subgroups. Furthermore, our results suggest that learning analytics researchers and practitioners can use slicing analysis to improve model fairness without necessarily sacrificing performance.
{"title":"Evaluating the Fairness of Predictive Student Models Through Slicing Analysis","authors":"Josh Gardner, Christopher A. Brooks, R. Baker","doi":"10.1145/3303772.3303791","DOIUrl":"https://doi.org/10.1145/3303772.3303791","url":null,"abstract":"Predictive modeling has been a core area of learning analytics research over the past decade, with such models currently deployed in a variety of educational contexts from MOOCs to K-12. However, analyses of the differential effectiveness of these models across demographic, identity, or other groups has been scarce. In this paper, we present a method for evaluating unfairness in predictive student models. We define this in terms of differential accuracy between subgroups, and measure it using a new metric we term the Absolute Between-ROC Area (ABROCA). We demonstrate the proposed method through a gender-based \"slicing analysis\" using five different models replicated from other works and a dataset of 44 unique MOOCs and over four million learners. Our results demonstrate (1) significant differences in model fairness according to (a) statistical algorithm and (b) feature set used; (2) that the gender imbalance ratio, curricular area, and specific course used for a model all display significant association with the value of the ABROCA statistic; and (3) that there is not evidence of a strict tradeoff between performance and fairness. This work provides a framework for quantifying and understanding how predictive models might inadvertently privilege, or disparately impact, different student subgroups. Furthermore, our results suggest that learning analytics researchers and practitioners can use slicing analysis to improve model fairness without necessarily sacrificing performance.1","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127238269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper is motivated by increasing interest among researchers and practitioners in developing technologies that capture, model, and analyze learning and teaching experiences taking place beyond computer-based learning environments. In this paper, we review case studies of tools and technologies developed to collect and analyze data in educational settings, quantify learning and teaching processes, and support assessment of learning and teaching in an automated fashion. We focus on pipelines that leverage information and data harnessed from physical spaces and/or integrate collected data across physical and digital spaces. Our review reveals a promising field of physical classroom analysis. We describe some trends and suggest potential future directions. Specifically, more research should be geared towards (a) deployable and sustainable data collection set-ups in physical learning environments, (b) teacher assessment, (c) developing feedback and visualization systems, and (d) promoting inclusivity and generalizability of models across populations.
{"title":"Technologies for automated analysis of co-located, real-life, physical learning spaces: Where are we now?","authors":"Y. H. V. Chua, J. Dauwels, S. Tan","doi":"10.1145/3303772.3303811","DOIUrl":"https://doi.org/10.1145/3303772.3303811","url":null,"abstract":"The motivation for this paper is derived from the fact that there has been increasing interest among researchers and practitioners in developing technologies that capture, model and analyze learning and teaching experiences that take place beyond computer-based learning environments. In this paper, we review case studies of tools and technologies developed to collect and analyze data in educational settings, quantify learning and teaching processes and support assessment of learning and teaching in an automated fashion. We focus on pipelines that leverage information and data harnessed from physical spaces and/or integrates collected data across physical and digital spaces. Our review reveals a promising field of physical classroom analysis. We describe some trends and suggest potential future directions. Specifically, more research should be geared towards a) deployable and sustainable data collection set-ups in physical learning environments, b) teacher assessment, c) developing feedback and visualization systems and d) promoting inclusivity and generalizability of models across populations.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116095288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To scaffold students' investigations of an inquiry-based immersive virtual world for science education without undercutting the affordances an open-ended activity provides, this study explores ways in which time-stamped log files of groups' actions may enable the automatic generation of formative supports. Groups' logged actions in the virtual world are filtered via principal component analysis to produce a time-series trajectory showing the rate of their investigative activities over time. This technique functions well in open-ended environments and examines the entire course of groups' experience in the virtual world rather than specific subsequences. The trajectories are then clustered via k-means to identify typical pathways taken through the immersive virtual world. These different approaches are correlated with learning gains across several survey constructs (affective dimensions, ecosystem science content, understanding of causality, and experimental methods) to see how various trends are associated with different outcomes. Differences by teacher and school are explored to see how best to support the inclusion and success of a diverse array of learners.
{"title":"Differences in Student Trajectories via Filtered Time Series Analysis in an Immersive Virtual World","authors":"J. Reilly, C. Dede","doi":"10.1145/3303772.3303832","DOIUrl":"https://doi.org/10.1145/3303772.3303832","url":null,"abstract":"To scaffold students' investigations of an inquiry-based immersive virtual world for science education without undercutting the affordances an open-ended activity provides, this study explores ways time-stamped log files of groups' actions may enable the automatic generation of formative supports. Groups' logged actions in the virtual world are filtered via principal component analysis to provide a time series trajectory showing the rate of their investigative activities over time. This technique functions well in open-ended environments and examines the entire course of their experience in the virtual world instead of specific subsequences. Groups' trajectories are grouped via k-means clustering to identify different typical pathways taken through the immersive virtual world. These different approaches are then correlated with learning gains across several survey constructs (affective dimensions, ecosystem science content, understanding of causality, and experimental methods) to see how various trends are associated with different outcomes. Differences by teacher and school are explored to see how best to support inclusion and success of a diverse array of learners.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"216 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123265326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Numerous studies have reported the effectiveness of example-based programming learning. However, recommending code examples with advanced machine-learning models remains underexplored. In this work, we propose a new method to extract semantic analytics between program code and its annotations. We hypothesize that these semantic analytics capture a large amount of valuable information that can be used as features to build predictive models. We evaluated the proposed semantic analytics extraction method with multiple deep learning algorithms. Results showed that deep learning models outperformed other models and the baseline in most cases. Further analysis indicated that in special cases the proposed method outperformed deep learning models by restricting false-positive classifications.
{"title":"Exploring Programming Semantic Analytics with Deep Learning Models","authors":"Yihan Lu, I-Han Hsiao","doi":"10.1145/3303772.3303823","DOIUrl":"https://doi.org/10.1145/3303772.3303823","url":null,"abstract":"There are numerous studies have reported the effectiveness of example-based programming learning. However, less is explored recommending code examples with advanced Machine Learning-based models. In this work, we propose a new method to explore the semantic analytics between programming codes and the annotations. We hypothesize that these semantics analytics will capture mass amount of valuable information that can be used as features to build predictive models. We evaluated the proposed semantic analytics extraction method with multiple deep learning algorithms. Results showed that deep learning models outperformed other models and baseline in most cases. Further analysis indicated that in special cases, the proposed method outperformed deep learning models by restricting false-positive classifications.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122689946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MOOCs have developed into multiple learning design models with a wide range of objectives. Teach-Outs are one such example, aiming to drive meaningful discussions around topics of pressing social urgency without the use of formal assessments. Given this approach, it is crucial to evaluate learners' engagement in the discussion forum to understand their experiences. This paper presents a pilot study that applied unsupervised natural language processing techniques to understand what and how students discuss in a Teach-Out. We used topic modeling to discover the emerging topics in the discussion forums and evaluated the on-topicness of the discussions (i.e., the degree to which discussions were relevant to the Teach-Out content). We also applied content analysis to investigate the sentiments associated with the discussions. We have taken a step toward extracting structure from students' discussions to understand the learning behaviors that take place in the discussion forum. This is the first study to analyze discussion forums in a Teach-Out.
{"title":"Exploring Learner Engagement Patterns in Teach-Outs Using Topic, Sentiment and On-topicness to Reflect on Pedagogy","authors":"Wenfei Yan, Nia Dowell, Caitlin Holman, Stephen S. Welsh, Heeryung Choi, Christopher A. Brooks","doi":"10.1145/3303772.3303836","DOIUrl":"https://doi.org/10.1145/3303772.3303836","url":null,"abstract":"MOOCs have developed into multiple learning design models with a wide range of objectives. Teach-Outs are one such example, aiming to drive meaningful discussions around topics of pressing social urgency without the use of formal assessments. Given this approach, it is crucial to evaluate learners' engagement in the discussion forum to understand their experiences. This paper presents a pilot study that applied unsupervised natural language processing techniques to understand what and how students engage in dialogue in a Teach-Out. We used topic modeling to discover the emerging topics in the discussion forums and evaluated the on-topicness of the discussions (i.e. the degree to which discussions were relevant to the Teach-Out content). We also applied content analysis to investigate the sentiments associated with the discussions. We have taken a step toward extracting structure from students' discussions to understand learning behaviors happen in the discussion forum. This is the first study to analyze discussion forums in a Teach-Out.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114228821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Establishing small learning groups in online courses is one way to foster collaborative knowledge building in an engaging and effective learning community. To enable group activities, it is not enough to design collaborative tasks and provide collaboration tools for online scenarios: without explicit guidance, collaboration in such learning groups is prone to fail or may never be initiated. In the target situations, interventions and guiding mechanisms have to scale with a growing number of course participants. To achieve this under privacy constraints, we aim to identify target indicators of well-functioning group work that do not rely on any kind of information about individual learners.
{"title":"Predicting the Well-functioning of Learning Groups under Privacy Restrictions","authors":"Tobias Hecking, Dorian Doberstein, H. Hoppe","doi":"10.1145/3303772.3303826","DOIUrl":"https://doi.org/10.1145/3303772.3303826","url":null,"abstract":"Establishing small learning groups in online courses is a possible way to foster collaborative knowledge building in an engaging and effective learning community. To enable group activities it is not enough to design collaborative tasks and to provide collaboration tools for online scenarios. Collaboration in such learning groups is prone to fail or even not to be initiated without explicit guidance. In the target situations, interventions and guiding mechanisms have to scale with a growing number of course participants. To achieve this under privacy constraints, we aim at identifying target indicators for well-functioning group work that do not rely on any kind of information about individual learners.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"97 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127999285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Several Learning Analytics applications are limited by the cost of generating a computer-understandable description of the course domain, known as an Intelligent Curriculum. This work contributes a novel approach to (semi-)automatically generate Intelligent Curricula through ontologies extracted from existing learning materials such as digital books or web content. Through a series of natural language processing steps, the semi-structured information present in existing content is transformed into a concept graph. This work also evaluates the proposed methodology by applying it to learning content for two different courses and measuring the quality of the extracted ontologies against manually generated ones. The results suggest that the technique can be readily used to provide domain information to other Learning Analytics tools.
{"title":"Semi-Automatic Generation of Intelligent Curricula to Facilitate Learning Analytics","authors":"Angel Fiallos, X. Ochoa","doi":"10.1145/3303772.3303834","DOIUrl":"https://doi.org/10.1145/3303772.3303834","url":null,"abstract":"Several Learning Analytics applications are limited by the cost of generating a computer understandable description of the course domain, what is called an Intelligent Curriculum. The following work contributes a novel approach to (semi-)automatically generate Intelligent Curriculum through ontologies extracted from existing learning materials such as digital books or web content. Through a series of natural language processing steps, the semi-structured information present in existing content is transformed into a concept-graph. This work also evaluates the proposed methodology by applying it to learning content for two different courses and measuring the quality of the extracted ontologies against manually generated ones. The results obtained suggest that the technique can be readily used to provide domain information to other Learning Analytics tools.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130865714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current Learning Analytics (LA) systems are primarily designed with university staff members as the target audience; very few are aimed at students, and almost none have been developed with direct student involvement or subjected to a comprehensive evaluation. This paper describes a HEFCE-funded project that employed a variety of methods to engage students in the design, development and evaluation of a student-facing LA dashboard. LA was integrated into the delivery of four undergraduate modules with 169 student sign-ups. The design of the dashboard takes a novel approach: it tries to understand the reasons why students want to study at university and maps their engagement and predicted outcomes to these motivations, with weekly personalised notifications and feedback. Students are also given the choice of how to visualise the data, either via a chart-based view or via a representation of themselves. A mixed-methods evaluation has shown that students' feelings of dependability and trust in the underlying analytics and data are variable. However, students were mostly positive about the usability and interface design of the system, and once signed up, almost all students interacted with their LA. The majority of students could see how the LA system could support their learning and said that it would influence their behaviour; in some cases, this had a direct impact on their levels of engagement. The main contribution of this paper is the transparent documentation of a User Centred Design approach that has produced forms of LA representation, recommendation and interaction design that go beyond those used in current similar systems and have been shown to motivate students and impact their learning behaviour.
{"title":"Student Centred Design of a Learning Analytics System","authors":"E. Quincey, C. Briggs, T. Kyriacou, R. Waller","doi":"10.1145/3303772.3303793","DOIUrl":"https://doi.org/10.1145/3303772.3303793","url":null,"abstract":"Current Learning Analytics (LA) systems are primarily designed with University staff members as the target audience; very few are aimed at students, with almost none being developed with direct student involvement and undertaking a comprehensive evaluation. This paper describes a HEFCE funded project that has employed a variety of methods to engage students in the design, development and evaluation of a student facing LA dashboard. LA was integrated into the delivery of 4 undergraduate modules with 169 student sign-ups. The design of the dashboard uses a novel approach of trying to understand the reasons why students want to study at university and maps their engagement and predicted outcomes to these motivations, with weekly personalised notifications and feedback. Students are also given the choice of how to visualise the data either via a chart-based view or to be represented as themselves. A mixed-methods evaluation has shown that students' feelings of dependability and trust of the underlying analytics and data is variable. However, students were mostly positive about the usability and interface design of the system and almost all students once signed-up did interact with their LA. The majority of students could see how the LA system could support their learning and said that it would influence their behaviour. In some cases, this has had a direct impact on their levels of engagement. The main contribution of this paper is the transparent documentation of a User Centred Design approach that has produced forms of LA representation, recommendation and interaction design that go beyond those used in current similar systems and have been shown to motivate students and impact their learning behaviour.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"29 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123875448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Knowledge Tracing (KT) is the task of tracing students' knowledge as they solve a sequence of problems, each represented by its related skills. This involves abstract concepts of students' states of knowledge and the interactions between those states and skills. A KT model is therefore designed both to predict whether students will give correct answers and to describe such abstract concepts. However, existing methods either give relatively low prediction accuracy or fail to explain those concepts intuitively. In this paper, we propose a new model called Knowledge Query Network (KQN) to solve these problems. KQN uses neural networks to encode student learning activities into knowledge state and skill vectors, and models the interactions between the two types of vectors with the dot product. Through this, we introduce a novel concept called probabilistic skill similarity that relates the pairwise cosine and Euclidean distances between skill vectors to the odds ratios of the corresponding skills, which makes KQN interpretable and intuitive. Experiments on four public datasets show the following: (1) KQN outperforms all existing KT models in prediction accuracy; (2) the interaction between the knowledge state and skills can be visualized for interpretation; (3) based on probabilistic skill similarity, a skill domain can be analyzed by clustering using the distances between KQN's skill vectors; and (4) across different values of the vector space dimensionality, KQN consistently exhibits high prediction accuracy and a strong positive correlation between the distance matrices of the skill vectors.
{"title":"Knowledge Query Network for Knowledge Tracing: How Knowledge Interacts with Skills","authors":"Jinseok Lee, D. Yeung","doi":"10.1145/3303772.3303786","DOIUrl":"https://doi.org/10.1145/3303772.3303786","url":null,"abstract":"Knowledge Tracing (KT) is to trace the knowledge of students as they solve a sequence of problems represented by their related skills. This involves abstract concepts of students' states of knowledge and the interactions between those states and skills. Therefore, a KT model is designed to predict whether students will give correct answers and to describe such abstract concepts. However, existing methods either give relatively low prediction accuracy or fail to explain those concepts intuitively. In this paper, we propose a new model called Knowledge Query Network (KQN) to solve these problems. KQN uses neural networks to encode student learning activities into knowledge state and skill vectors, and models the interactions between the two types of vectors with the dot product. Through this, we introduce a novel concept called probabilistic skill similarity that relates the pairwise cosine and Euclidean distances between skill vectors to the odds ratios of the corresponding skills, which makes KQN interpretable and intuitive. On four public datasets, we have carried out experiments to show the following: 1. KQN outperforms all the existing KT models based on prediction accuracy. 2. The interaction between the knowledge state and skills can be visualized for interpretation. 3. Based on probabilistic skill similarity, a skill domain can be analyzed with clustering using the distances between the skill vectors of KQN. 4. For different values of the vector space dimensionality, KQN consistently exhibits high prediction accuracy and a strong positive correlation between the distance matrices of the skill vectors.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128793391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online lectures are an increasingly popular tool for learning, yet the effects of instructor visibility during an online lecture, and of students' environmental settings, have not been well explored. The current study addresses this gap in the literature by experimentally manipulating the online display format and the social learning setting to understand their influence on student learning and mind-wandering (MW) experiences. Results suggest that instructor visibility within an online lecture does not impact students' MW or retention performance. However, we found some evidence that students' social setting during viewing has an impact on MW (p = .05): students who watched the lecture in a classroom with others reported significantly more MW than students who watched the lecture alone. Finally, social setting also moderated the negative relationship between MW and material retention. Our results demonstrate that learning experiences during online lectures can vary based on where, and with whom, the lectures are watched.
{"title":"Where You Are, Not What You See: The Impact of Learning Environment on Mind Wandering and Material Retention","authors":"Trish L. Varao-Sousa, Caitlin Mills, A. Kingstone","doi":"10.1145/3303772.3303824","DOIUrl":"https://doi.org/10.1145/3303772.3303824","url":null,"abstract":"Online lectures are an increasingly popular tool for learning, yet research on instructor visibility during an online lecture, and students' environmental settings, has not been well-explored. The current study addresses this gap in the literature by experimentally manipulating online display format and social learning settings to understand their influence on student learning and mind-wandering experiences. Results suggest that instructor visibility within an online lecture does not impact students' MW or retention performance. However, we found some evidence that students' social setting during viewing has an impact on MW (p = .05). Specifically, students who watched the lecture in a classroom with others reported significantly more MW than students who watched the lecture alone. Finally, social setting also moderated the negative relationship between MW and material retention. Our results demonstrate that learning experiences during online lectures can vary based on where, and with whom, the lectures are watched.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116621672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}