Deep Learning Models for Analyzing Social Construction of Knowledge Online

IF 2.8 · Q1 · EDUCATION & EDUCATIONAL RESEARCH · Online Learning · Pub Date: 2023-12-01 · DOI: 10.24059/olj.v27i4.4055
C. Gunawardena, Yan Chen, Flor Nick, Sanchez Damien
{"title":"用于分析在线知识社会构建的深度学习模型","authors":"C. Gunawardena, Yan Chen, Flor Nick, Sanchez Damien","doi":"10.24059/olj.v27i4.4055","DOIUrl":null,"url":null,"abstract":"Gunawardena et al.’s (1997) Interaction Analysis Model (IAM) is one of the most frequently employed frameworks to guide the qualitative analysis of social construction of knowledge online. However, qualitative analysis is time consuming, and precludes immediate feedback to revise online courses while being delivered. To expedite analysis with a large dataset, this study explores how two neural network architectures—a feed-forward network (Doc2Vec) and a large language model transformer (BERT)—could automatically predict phases of knowledge construction using IAM. The methods interrogated the extent to which the artificial neural networks’ predictions of IAM Phases approximated a human coder’s qualitative analysis. Key results indicate an accuracy of 21.55% for Doc2Vec phases I-V, 43% for fine-tuning a pre-trained large language model (LLM), and 52.79% for prompt-engineering an LLM. Future studies for improving accuracy should consider either training the models with larger datasets or focusing on the design of prompts to improve classification accuracy. Grounded on social constructivism and IAM, this study has implications for designing and supporting online collaborative learning where the goal is social construction of knowledge. Moreover, it has teaching implications for guiding the design of AI tools that provide beneficial feedback for both students and course designers.","PeriodicalId":54195,"journal":{"name":"Online Learning","volume":" 20","pages":""},"PeriodicalIF":2.8000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep Learning Models for Analyzing Social Construction of Knowledge Online\",\"authors\":\"C. Gunawardena, Yan Chen, Flor Nick, Sanchez Damien\",\"doi\":\"10.24059/olj.v27i4.4055\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Gunawardena et al.’s (1997) Interaction Analysis Model (IAM) is one of the most frequently employed frameworks to guide the qualitative analysis of social construction of knowledge online. However, qualitative analysis is time consuming, and precludes immediate feedback to revise online courses while being delivered. To expedite analysis with a large dataset, this study explores how two neural network architectures—a feed-forward network (Doc2Vec) and a large language model transformer (BERT)—could automatically predict phases of knowledge construction using IAM. The methods interrogated the extent to which the artificial neural networks’ predictions of IAM Phases approximated a human coder’s qualitative analysis. Key results indicate an accuracy of 21.55% for Doc2Vec phases I-V, 43% for fine-tuning a pre-trained large language model (LLM), and 52.79% for prompt-engineering an LLM. Future studies for improving accuracy should consider either training the models with larger datasets or focusing on the design of prompts to improve classification accuracy. Grounded on social constructivism and IAM, this study has implications for designing and supporting online collaborative learning where the goal is social construction of knowledge. 
Moreover, it has teaching implications for guiding the design of AI tools that provide beneficial feedback for both students and course designers.\",\"PeriodicalId\":54195,\"journal\":{\"name\":\"Online Learning\",\"volume\":\" 20\",\"pages\":\"\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2023-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Online Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.24059/olj.v27i4.4055\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Online Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.24059/olj.v27i4.4055","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract

Gunawardena et al.’s (1997) Interaction Analysis Model (IAM) is one of the most frequently employed frameworks to guide the qualitative analysis of social construction of knowledge online. However, qualitative analysis is time consuming, and precludes immediate feedback to revise online courses while being delivered. To expedite analysis with a large dataset, this study explores how two neural network architectures—a feed-forward network (Doc2Vec) and a large language model transformer (BERT)—could automatically predict phases of knowledge construction using IAM. The methods interrogated the extent to which the artificial neural networks’ predictions of IAM Phases approximated a human coder’s qualitative analysis. Key results indicate an accuracy of 21.55% for Doc2Vec phases I-V, 43% for fine-tuning a pre-trained large language model (LLM), and 52.79% for prompt-engineering an LLM. Future studies for improving accuracy should consider either training the models with larger datasets or focusing on the design of prompts to improve classification accuracy. Grounded on social constructivism and IAM, this study has implications for designing and supporting online collaborative learning where the goal is social construction of knowledge. Moreover, it has teaching implications for guiding the design of AI tools that provide beneficial feedback for both students and course designers.
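As a concrete illustration of the fine-tuning approach mentioned in the abstract, the sketch below shows how a pre-trained BERT model could be adapted for five-way classification of discussion-post messages into IAM Phases I–V. This is a minimal sketch, not the authors’ published pipeline: the bert-base-uncased checkpoint, the hyperparameters, and the placeholder training messages are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' actual code) of fine-tuning a pre-trained
# BERT model to classify discussion posts into the five IAM phases.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

IAM_PHASES = [
    "I: Sharing/comparing of information",
    "II: Discovery and exploration of dissonance or inconsistency",
    "III: Negotiation of meaning/co-construction of knowledge",
    "IV: Testing and modification of proposed co-construction",
    "V: Agreement/application of newly constructed meaning",
]

# Placeholder training examples (message text, phase index 0-4); a real run
# would use the study's coded transcript data instead.
train_data = [
    ("Here is an article I found on online collaboration.", 0),
    ("I disagree; my experience with group projects was different.", 1),
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(IAM_PHASES)
)
optimizer = AdamW(model.parameters(), lr=2e-5)  # illustrative learning rate

model.train()
for epoch in range(3):  # illustrative epoch count
    for text, label in train_data:
        batch = tokenizer(text, truncation=True, return_tensors="pt")
        outputs = model(**batch, labels=torch.tensor([label]))
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Predict the IAM phase of a new, unseen message.
model.eval()
with torch.no_grad():
    batch = tokenizer("Building on your point, maybe we can combine both ideas.",
                      return_tensors="pt")
    phase = IAM_PHASES[model(**batch).logits.argmax(dim=-1).item()]
print(phase)
```

In a setup like this, the model’s predicted phases would be compared against a human coder’s labels to obtain the kind of agreement figure the abstract reports for the fine-tuned LLM.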
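The prompt-engineering variant, which showed the highest agreement with the human coder in this study, can be sketched as a zero-shot classification prompt sent to a chat-style LLM. The paper does not disclose the model or prompt wording used, so the model name, the prompt text, and the classify_iam_phase helper below are hypothetical.

```python
# Hypothetical sketch of prompt-engineering an LLM to label a post with an IAM
# phase; the model choice and prompt are assumptions, not the authors' materials.
# Requires the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

IAM_PHASES = {
    "I": "Sharing/comparing of information",
    "II": "Discovery and exploration of dissonance or inconsistency",
    "III": "Negotiation of meaning/co-construction of knowledge",
    "IV": "Testing and modification of proposed co-construction",
    "V": "Agreement/application of newly constructed meaning",
}

SYSTEM_PROMPT = (
    "You are a researcher coding online discussion posts with the Interaction "
    "Analysis Model (Gunawardena et al., 1997). Given a post, respond with only "
    "one label: I, II, III, IV, or V.\n"
    + "\n".join(f"{label}: {desc}" for label, desc in IAM_PHASES.items())
)

client = OpenAI()

def classify_iam_phase(post: str) -> str:
    """Return the predicted IAM phase label (I-V) for one discussion post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        temperature=0,         # deterministic output for a coding task
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_iam_phase("I tried your suggestion in my own course and it worked."))
```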
Source journal: Online Learning (EDUCATION & EDUCATIONAL RESEARCH)
CiteScore: 7.40
Self-citation rate: 15.00%
Articles published: 55
Review time: 30 weeks