{"title":"HierLLM:用于问题推荐的分层大语言模型","authors":"Yuxuan Liu, Haipeng Liu, Ting Long","doi":"arxiv-2409.06177","DOIUrl":null,"url":null,"abstract":"Question recommendation is a task that sequentially recommends questions for\nstudents to enhance their learning efficiency. That is, given the learning\nhistory and learning target of a student, a question recommender is supposed to\nselect the question that will bring the most improvement for students. Previous\nmethods typically model the question recommendation as a sequential\ndecision-making problem, estimating students' learning state with the learning\nhistory, and feeding the learning state with the learning target to a neural\nnetwork to select the recommended question from a question set. However,\nprevious methods are faced with two challenges: (1) learning history is\nunavailable in the cold start scenario, which makes the recommender generate\ninappropriate recommendations; (2) the size of the question set is much large,\nwhich makes it difficult for the recommender to select the best question\nprecisely. To address the challenges, we propose a method called hierarchical\nlarge language model for question recommendation (HierLLM), which is a\nLLM-based hierarchical structure. The LLM-based structure enables HierLLM to\ntackle the cold start issue with the strong reasoning abilities of LLM. The\nhierarchical structure takes advantage of the fact that the number of concepts\nis significantly smaller than the number of questions, narrowing the range of\nselectable questions by first identifying the relevant concept for the\nto-recommend question, and then selecting the recommended question based on\nthat concept. 
This hierarchical structure reduces the difficulty of the\nrecommendation.To investigate the performance of HierLLM, we conduct extensive\nexperiments, and the results demonstrate the outstanding performance of\nHierLLM.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HierLLM: Hierarchical Large Language Model for Question Recommendation\",\"authors\":\"Yuxuan Liu, Haipeng Liu, Ting Long\",\"doi\":\"arxiv-2409.06177\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Question recommendation is a task that sequentially recommends questions for\\nstudents to enhance their learning efficiency. That is, given the learning\\nhistory and learning target of a student, a question recommender is supposed to\\nselect the question that will bring the most improvement for students. Previous\\nmethods typically model the question recommendation as a sequential\\ndecision-making problem, estimating students' learning state with the learning\\nhistory, and feeding the learning state with the learning target to a neural\\nnetwork to select the recommended question from a question set. However,\\nprevious methods are faced with two challenges: (1) learning history is\\nunavailable in the cold start scenario, which makes the recommender generate\\ninappropriate recommendations; (2) the size of the question set is much large,\\nwhich makes it difficult for the recommender to select the best question\\nprecisely. To address the challenges, we propose a method called hierarchical\\nlarge language model for question recommendation (HierLLM), which is a\\nLLM-based hierarchical structure. The LLM-based structure enables HierLLM to\\ntackle the cold start issue with the strong reasoning abilities of LLM. 
The\\nhierarchical structure takes advantage of the fact that the number of concepts\\nis significantly smaller than the number of questions, narrowing the range of\\nselectable questions by first identifying the relevant concept for the\\nto-recommend question, and then selecting the recommended question based on\\nthat concept. This hierarchical structure reduces the difficulty of the\\nrecommendation.To investigate the performance of HierLLM, we conduct extensive\\nexperiments, and the results demonstrate the outstanding performance of\\nHierLLM.\",\"PeriodicalId\":501281,\"journal\":{\"name\":\"arXiv - CS - Information Retrieval\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Information Retrieval\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06177\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06177","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
HierLLM: Hierarchical Large Language Model for Question Recommendation
Question recommendation is a task that sequentially recommends questions for
students to enhance their learning efficiency. That is, given the learning
history and learning target of a student, a question recommender is supposed to
select the question that will bring the most improvement for the student. Previous
methods typically model the question recommendation as a sequential
decision-making problem, estimating students' learning state with the learning
history, and feeding the learning state with the learning target to a neural
network to select the recommended question from a question set. However,
previous methods face two challenges: (1) the learning history is
unavailable in the cold-start scenario, which leads the recommender to
generate inappropriate recommendations; (2) the question set is very large,
which makes it difficult for the recommender to precisely select the best
question. To address these challenges, we propose a method called hierarchical
large language model for question recommendation (HierLLM), an
LLM-based hierarchical structure. The LLM-based structure enables HierLLM to
tackle the cold-start issue with the strong reasoning abilities of LLMs. The
hierarchical structure takes advantage of the fact that the number of concepts
is significantly smaller than the number of questions, narrowing the range of
selectable questions: it first identifies the concept relevant to the
question to be recommended, and then selects the recommended question based
on that concept. This hierarchical structure reduces the difficulty of the
recommendation. To investigate the performance of HierLLM, we conduct
extensive experiments, and the results demonstrate its outstanding
performance.
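The two-stage selection the abstract describes can be sketched in a few lines. This is a hypothetical illustration of the hierarchical idea only, not the paper's actual model: the scores here stand in for whatever the LLM-based policy would produce, and the names (`recommend`, `concept_of`) are invented for the example.

```python
# Hypothetical sketch of hierarchical (two-stage) question selection:
# stage 1 picks the most relevant concept, stage 2 picks a question
# within that concept, so each stage scores a much smaller candidate set
# than scoring every question directly.

def recommend(concept_scores, question_scores, concept_of):
    """concept_scores: dict concept -> relevance score (stand-in for a model output).
    question_scores: dict question -> score.
    concept_of: dict question -> concept the question covers."""
    # Stage 1: choose the concept with the highest score.
    best_concept = max(concept_scores, key=concept_scores.get)
    # Stage 2: restrict to questions under that concept, then pick the best.
    candidates = [q for q, c in concept_of.items() if c == best_concept]
    return max(candidates, key=question_scores.get)

concept_scores = {"fractions": 0.9, "algebra": 0.4}
question_scores = {"q1": 0.2, "q2": 0.8, "q3": 0.7}
concept_of = {"q1": "algebra", "q2": "fractions", "q3": "fractions"}
print(recommend(concept_scores, question_scores, concept_of))  # q2
```

The benefit mirrored here is the one the abstract claims: because the number of concepts is far smaller than the number of questions, stage 1 narrows the candidate pool before any question-level choice is made.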