PTR: Prompt Tuning with Rules for Text Classification

Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, Maosong Sun

AI Open, Volume 3 (2022), pp. 182–192. DOI: 10.1016/j.aiopen.2022.11.003. Cited by: 291.

Abstract

Prompt tuning has recently been widely applied to elicit the rich knowledge in pre-trained language models (PLMs) for NLP tasks. Although it has achieved promising results on few-class classification tasks such as sentiment classification and natural language inference, manually designing prompts is cumbersome, and generating prompts automatically is also difficult and time-consuming. Obtaining effective prompts for complex many-class classification tasks therefore remains a challenge. In this paper, we propose to encode the prior knowledge of a classification task into rules, design sub-prompts according to those rules, and finally combine the sub-prompts to handle the task. We name this Prompt Tuning method with Rules "PTR". Compared with existing prompt-based methods, PTR achieves a good trade-off between effectiveness and efficiency in building prompts. We conduct experiments on three many-class classification tasks: relation classification, entity typing, and intent classification. The results show that PTR outperforms both vanilla fine-tuning and prompt tuning baselines, indicating the effectiveness of utilizing rules for prompt tuning. The source code of PTR is available at https://github.com/thunlp/PTR.
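To make the rule-to-prompt composition concrete, below is a minimal Python sketch of how rule-based sub-prompts might be combined for relation classification. The template wording, function names, and label words are illustrative assumptions for this sketch, not the paper's exact templates; see the linked repository for the actual implementation.

# Hypothetical sketch of PTR's core idea for relation classification:
# each rule condition gets its own sub-prompt with a [MASK] slot, and
# the sub-prompts are concatenated into one template appended to the input.

def entity_sub_prompt(entity: str) -> str:
    # Sub-prompt asking the PLM to type an entity, e.g. "the [MASK] Harper Lee".
    return f"the [MASK] {entity}"

def relation_sub_prompt() -> str:
    # Sub-prompt asking the PLM for the connective between the two entities.
    return "[MASK] of"

def build_prompt(sentence: str, subj: str, obj: str) -> str:
    # Combine the three sub-prompts into a single prompt after the input text.
    return (f"{sentence} {entity_sub_prompt(subj)} "
            f"{relation_sub_prompt()} {entity_sub_prompt(obj)}.")

# Each relation label corresponds to a conjunction of label words, one per
# [MASK] position; a label is predicted only when all positions agree.
# These label words are illustrative, not the paper's verbalizer.
LABEL_WORDS = {
    "per:city_of_birth": ("person", "native", "city"),
    "org:founded_by":    ("organization", "founder", "person"),
    "no_relation":       ("entity", "irrelevant", "entity"),
}

print(build_prompt("Harper Lee was born in Monroeville.", "Harper Lee", "Monroeville"))
# -> Harper Lee was born in Monroeville. the [MASK] Harper Lee [MASK] of the [MASK] Monroeville.

Because each rule condition carries its own [MASK] slot and label words, the number of hand-designed sub-prompts grows with the number of conditions rather than with the number of classes, which is the intuition behind the effectiveness–efficiency trade-off claimed in the abstract.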
