Online educational video engagement prediction based on dynamic graph neural networks

International Journal of Web Information Systems · Q2, Computer Science, Information Systems (IF 2.5) · Published: 2023-09-08 · DOI: 10.1108/ijwis-05-2023-0083
Xiancheng Ou, Yuting Chen, Siwei Zhou, Jiandong Shi

Abstract

Purpose

With the continuous growth of online education, the quality of online educational videos has become an increasingly prominent concern: low-quality videos leave online learners confused about the material. Existing quality-control mechanisms for online educational videos rely on subjective judgment and respond slowly. One important approach to monitoring video quality is to analyze a video's metadata features and log data. With the development of artificial intelligence, deep learning techniques with strong predictive capabilities offer new methods for predicting video quality that overcome the shortcomings of existing approaches. The purpose of this study is to design a deep neural network that models both the static and dynamic features of a video, as well as the relationships between videos, to enable dynamic monitoring of the quality of online educational videos.

Design/methodology/approach

Video quality cannot be measured directly. Following previous research, the authors use engagement as a proxy for quality: engagement is the normalized participation time, representing the degree to which learners tend to watch the video. Using existing public data sets, this study designs an engagement prediction model for online educational videos based on dynamic graph neural networks (DGNNs). Dynamic graph data are constructed from each video's static features and the dynamic features generated after its release, and the model is trained on these graphs. The model includes a spatiotemporal feature extraction layer composed of DGNNs, which extracts the temporal and spatial features contained in the dynamic graph data. The trained model predicts learner engagement with a video on day T after its release, thereby enabling dynamic monitoring of video quality.
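The abstract defines engagement as "normalized participation time" without giving an exact formula. A minimal sketch of one plausible reading, assuming engagement is the mean fraction of the video watched per viewing session, clipped to [0, 1] (the function name and aggregation choice are illustrative assumptions, not the authors' specification):

```python
import numpy as np

def engagement(watch_times: np.ndarray, video_duration: float) -> float:
    """Normalized participation time: the mean fraction of the video
    that learners watched per session, clipped to [0, 1]."""
    if video_duration <= 0:
        raise ValueError("video_duration must be positive")
    fractions = np.clip(np.asarray(watch_times, dtype=float) / video_duration, 0.0, 1.0)
    return float(fractions.mean())

# Example: a 600-second video watched for 600 s, 450 s and 300 s
# by three learners -> fractions 1.0, 0.75, 0.5 -> engagement 0.75
print(engagement(np.array([600.0, 450.0, 300.0]), 600.0))  # 0.75
```

Clipping guards against logged watch times that exceed the video length (e.g. rewatched segments), which would otherwise push the score above 1.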
Findings

Models whose spatiotemporal feature extraction layer uses any of four types of DGNNs can accurately predict the engagement level of online educational videos. Among them, the model using the temporal graph convolutional network has the smallest prediction error. In dynamic graph construction, cosine similarity and Euclidean distance functions with reasonable threshold settings produce structurally appropriate dynamic graphs. The amount of historical time-series data used in training affects predictive performance: the more historical data used, the smaller the prediction error of the trained model.

Research limitations/implications

Due to memory constraints, not all video data in the data set were used to construct the dynamic graph. In addition, the DGNNs used in the spatiotemporal feature extraction layer are relatively conventional.

Originality/value

The authors propose a DGNN-based engagement prediction model for online educational videos that enables dynamic monitoring of video quality. The model can be applied as part of a video quality monitoring mechanism on various online educational resource platforms.
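The findings mention building the dynamic graph by thresholding cosine similarity (or Euclidean distance) between video feature vectors. A minimal sketch of one graph snapshot built this way, assuming videos are nodes and an edge connects two videos whose feature vectors exceed a cosine-similarity threshold (the threshold value 0.9 and the feature representation are illustrative assumptions; the paper's actual settings are not given in the abstract):

```python
import numpy as np

def build_snapshot_adjacency(features, threshold: float = 0.9) -> np.ndarray:
    """Adjacency matrix for one daily graph snapshot: videos i and j are
    connected when the cosine similarity of their feature vectors
    exceeds the threshold. Self-loops are removed."""
    X = np.asarray(features, dtype=float)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    unit = X / np.clip(norms, 1e-12, None)      # row-normalize, avoid /0
    sim = unit @ unit.T                          # pairwise cosine similarity
    adj = (sim > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)                   # no self-loops
    return adj

# Videos 0 and 1 have nearly identical features; video 2 is orthogonal,
# so only the 0-1 edge survives the threshold.
adj = build_snapshot_adjacency([[1.0, 0.0], [1.0, 0.01], [0.0, 1.0]])
print(adj)
```

Repeating this per day over the post-release feature streams yields the sequence of graph snapshots that a DGNN's spatiotemporal layer consumes; a Euclidean-distance variant would simply replace the similarity test with a distance-below-threshold test.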