Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education

Lin Gao, Jing Lu, Zekai Shao, Ziyue Lin, Shengbin Yue, Chiokit Leong, Yi Sun, Rory James Zauner, Zhongyu Wei, Siming Chen
DOI: 10.1109/TVCG.2024.3456145
Journal: IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 1, pp. 514-524
Published: 2024-09-10
URL: https://ieeexplore.ieee.org/document/10670435/

Abstract

Large Language Models (LLMs) have shown great potential in intelligent visualization systems, especially for domain-specific applications. Integrating LLMs into visualization systems presents challenges, which we categorize into three alignments: aligning domain problems with LLMs, aligning visualization with LLMs, and aligning interaction with LLMs. To achieve these alignments, we propose a framework and outline a workflow to guide the application of fine-tuned LLMs to enhance visual interactions for domain-specific tasks. These alignment challenges are critical in education because of the need for an intelligent visualization system to support beginners' self-regulated learning. Therefore, we apply the framework to education and introduce Tailor-Mind, an interactive visualization system designed to facilitate self-regulated learning for artificial intelligence beginners. Drawing on insights from a preliminary study, we identify self-regulated learning tasks and fine-tuning objectives to guide visualization design and tuning-data construction. Our focus on aligning visualization with the fine-tuned LLM makes Tailor-Mind function more like a personalized tutor. Tailor-Mind also supports interactive recommendations to help beginners better achieve their learning goals. Model performance evaluations and user studies confirm that Tailor-Mind improves the self-regulated learning experience, effectively validating the proposed framework.
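The abstract mentions constructing tuning data from identified self-regulated learning tasks and fine-tuning objectives. A minimal sketch of what such data construction could look like, assuming the common instruction/input/output JSONL format used by many fine-tuning toolkits — the field names, helper function, and example content here are hypothetical illustrations, not the paper's actual schema:

```python
import json

def build_tuning_record(concept: str, question: str, answer: str) -> dict:
    """Wrap one beginner Q&A pair as an instruction-tuning example.

    Hypothetical helper: the paper's real tuning-data schema is not
    specified in the abstract.
    """
    return {
        "instruction": f"As an AI tutor, answer a beginner's question about {concept}.",
        "input": question,
        "output": answer,
    }

records = [
    build_tuning_record(
        "gradient descent",
        "Why do we need a learning rate?",
        "The learning rate scales each parameter update so that training "
        "converges steadily instead of overshooting the minimum.",
    ),
]

# Serialize to JSONL, one example per line, as most fine-tuning
# pipelines expect for supervised fine-tuning data.
jsonl = "\n".join(json.dumps(r, ensure_ascii=False) for r in records)
print(jsonl)
```

Pairing each record with an explicit tutoring instruction is one straightforward way to steer a fine-tuned model toward the "personalized tutor" behavior the abstract describes.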