{"title":"LLMs + Persona-Plug = 个性化 LLMs","authors":"Jiongnan Liu, Yutao Zhu, Shuting Wang, Xiaochi Wei, Erxue Min, Yu Lu, Shuaiqiang Wang, Dawei Yin, Zhicheng Dou","doi":"arxiv-2409.11901","DOIUrl":null,"url":null,"abstract":"Personalization plays a critical role in numerous language tasks and\napplications, since users with the same requirements may prefer diverse outputs\nbased on their individual interests. This has led to the development of various\npersonalized approaches aimed at adapting large language models (LLMs) to\ngenerate customized outputs aligned with user preferences. Some of them involve\nfine-tuning a unique personalized LLM for each user, which is too expensive for\nwidespread application. Alternative approaches introduce personalization\ninformation in a plug-and-play manner by retrieving the user's relevant\nhistorical texts as demonstrations. However, this retrieval-based strategy may\nbreak the continuity of the user history and fail to capture the user's overall\nstyles and patterns, hence leading to sub-optimal performance. To address these\nchallenges, we propose a novel personalized LLM model, \\ours{}. It constructs a\nuser-specific embedding for each individual by modeling all her historical\ncontexts through a lightweight plug-in user embedder module. By attaching this\nembedding to the task input, LLMs can better understand and capture user habits\nand preferences, thereby producing more personalized outputs without tuning\ntheir own parameters. Extensive experiments on various tasks in the language\nmodel personalization (LaMP) benchmark demonstrate that the proposed model\nsignificantly outperforms existing personalized LLM approaches.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":"3 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LLMs + Persona-Plug = Personalized LLMs\",\"authors\":\"Jiongnan Liu, Yutao Zhu, Shuting Wang, Xiaochi Wei, Erxue Min, Yu Lu, Shuaiqiang Wang, Dawei Yin, Zhicheng Dou\",\"doi\":\"arxiv-2409.11901\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Personalization plays a critical role in numerous language tasks and\\napplications, since users with the same requirements may prefer diverse outputs\\nbased on their individual interests. This has led to the development of various\\npersonalized approaches aimed at adapting large language models (LLMs) to\\ngenerate customized outputs aligned with user preferences. Some of them involve\\nfine-tuning a unique personalized LLM for each user, which is too expensive for\\nwidespread application. Alternative approaches introduce personalization\\ninformation in a plug-and-play manner by retrieving the user's relevant\\nhistorical texts as demonstrations. However, this retrieval-based strategy may\\nbreak the continuity of the user history and fail to capture the user's overall\\nstyles and patterns, hence leading to sub-optimal performance. To address these\\nchallenges, we propose a novel personalized LLM model, \\\\ours{}. It constructs a\\nuser-specific embedding for each individual by modeling all her historical\\ncontexts through a lightweight plug-in user embedder module. By attaching this\\nembedding to the task input, LLMs can better understand and capture user habits\\nand preferences, thereby producing more personalized outputs without tuning\\ntheir own parameters. 
Extensive experiments on various tasks in the language\\nmodel personalization (LaMP) benchmark demonstrate that the proposed model\\nsignificantly outperforms existing personalized LLM approaches.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":\"3 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11901\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11901","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Personalization plays a critical role in numerous language tasks and
applications, since users with the same requirements may prefer diverse outputs
based on their individual interests. This has led to the development of various
personalized approaches aimed at adapting large language models (LLMs) to
generate customized outputs aligned with user preferences. Some of them involve
fine-tuning a unique personalized LLM for each user, which is too expensive for
widespread application. Alternative approaches introduce personalization
information in a plug-and-play manner by retrieving the user's relevant
historical texts as demonstrations. However, this retrieval-based strategy may
break the continuity of the user history and fail to capture the user's overall
styles and patterns, hence leading to sub-optimal performance. To address these
challenges, we propose a novel personalized LLM model, Persona-Plug. It constructs a
user-specific embedding for each individual by modeling all of the user's historical
contexts through a lightweight plug-in user embedder module. By attaching this
embedding to the task input, LLMs can better understand and capture user habits
and preferences, thereby producing more personalized outputs without tuning
their own parameters. Extensive experiments on various tasks in the language
model personalization (LaMP) benchmark demonstrate that the proposed model
significantly outperforms existing personalized LLM approaches.
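
The mechanism the abstract describes, a small trainable module that condenses a user's entire history into a single embedding and attaches it to the task input of a frozen LLM, can be sketched roughly as below. This is a minimal illustration under assumed design choices (attention-style pooling over per-document history encodings, a linear projection into the LLM's embedding space); names such as PersonaPlug and personalize_inputs are hypothetical and not taken from the paper.

```python
# Hedged sketch of a plug-in user embedder for a frozen LLM.
# Assumption: history documents are pre-encoded by a small text encoder,
# and we can prepend soft tokens to the LLM's input embeddings.
import torch
import torch.nn as nn


class PersonaPlug(nn.Module):
    """Lightweight plug-in that turns a user's full history into one embedding."""

    def __init__(self, hist_dim: int, llm_dim: int):
        super().__init__()
        # Attention-style pooling over history encodings, then a projection
        # into the LLM's embedding dimension.
        self.score = nn.Linear(hist_dim, 1)
        self.project = nn.Linear(hist_dim, llm_dim)

    def forward(self, history_vecs: torch.Tensor) -> torch.Tensor:
        # history_vecs: (num_history_docs, hist_dim)
        weights = torch.softmax(self.score(history_vecs), dim=0)  # (N, 1)
        user_vec = (weights * history_vecs).sum(dim=0)            # (hist_dim,)
        return self.project(user_vec)                             # (llm_dim,)


def personalize_inputs(task_embeds: torch.Tensor, user_embed: torch.Tensor) -> torch.Tensor:
    """Prepend the user embedding as one extra soft token; the LLM stays frozen."""
    # task_embeds: (seq_len, llm_dim) token embeddings of the task input.
    return torch.cat([user_embed.unsqueeze(0), task_embeds], dim=0)


if __name__ == "__main__":
    hist_dim, llm_dim = 384, 4096
    plug = PersonaPlug(hist_dim, llm_dim)   # only these parameters would be trained
    history = torch.randn(20, hist_dim)     # 20 encoded history documents
    task = torch.randn(12, llm_dim)         # 12 task-input token embeddings
    user_embed = plug(history)
    print(personalize_inputs(task, user_embed).shape)  # torch.Size([13, 4096])
```

In this reading, only the plug-in module is optimized per task while the backbone LLM's parameters stay untouched, which is what lets a single frozen model serve many users with per-user embeddings.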