HierLLM: Hierarchical Large Language Model for Question Recommendation

Yuxuan Liu, Haipeng Liu, Ting Long
arXiv:2409.06177 · arXiv - CS - Information Retrieval · Published 2024-09-10
Citations: 0

Abstract

Question recommendation is the task of sequentially recommending questions to students to enhance their learning efficiency. That is, given a student's learning history and learning target, a question recommender is supposed to select the question that will bring the student the most improvement. Previous methods typically model question recommendation as a sequential decision-making problem, estimating the student's learning state from the learning history and feeding the learning state together with the learning target to a neural network that selects the recommended question from a question set. However, previous methods face two challenges: (1) learning history is unavailable in the cold-start scenario, which leads the recommender to generate inappropriate recommendations; (2) the question set is very large, which makes it difficult for the recommender to select the best question precisely. To address these challenges, we propose a method called hierarchical large language model for question recommendation (HierLLM), an LLM-based hierarchical structure. The LLM-based structure enables HierLLM to tackle the cold-start issue with the strong reasoning abilities of LLMs. The hierarchical structure takes advantage of the fact that the number of concepts is significantly smaller than the number of questions: it narrows the range of selectable questions by first identifying the relevant concept for the question to recommend, and then selecting the recommended question based on that concept. This hierarchical structure reduces the difficulty of the recommendation. To investigate the performance of HierLLM, we conduct extensive experiments, and the results demonstrate the outstanding performance of HierLLM.
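The two-stage narrowing described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' implementation: the scoring functions stand in for HierLLM's LLM-based concept and question modules, and all concept/question names are made up.

```python
# Sketch of hierarchical narrowing: score the small set of concepts first,
# then score only the questions under the winning concept, instead of
# scoring every question in the full (much larger) question set.

def recommend(score_concept, score_question, concept_to_questions, target):
    # Stage 1: rank concepts against the learning target (small search space).
    concept = max(concept_to_questions, key=lambda c: score_concept(c, target))
    # Stage 2: rank only the questions belonging to the chosen concept.
    return max(concept_to_questions[concept],
               key=lambda q: score_question(q, target))

# Toy data: two concepts, three questions (all names are illustrative).
pool = {"algebra": ["q1", "q2"], "geometry": ["q3"]}
concept_score = lambda c, target: 1.0 if c == target else 0.0
question_score = lambda q, target: {"q1": 0.2, "q2": 0.9, "q3": 0.5}[q]

print(recommend(concept_score, question_score, pool, "algebra"))  # q2
```

Because stage 1 discards every question outside the chosen concept, stage 2 compares only a handful of candidates, which is the source of the reduced recommendation difficulty the abstract claims.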