Soft prompt-tuning for unsupervised domain adaptation via self-supervision

IF 5.5 | CAS Region 2, Computer Science | JCR Q1, Computer Science, Artificial Intelligence | Neurocomputing | Pub Date: 2024-11-28 | DOI: 10.1016/j.neucom.2024.129008
Yi Zhu, Shuqin Wang, Yun Li, Yunhao Yuan, Jipeng Qiang
{"title":"Soft prompt-tuning for unsupervised domain adaptation via self-supervision","authors":"Yi Zhu ,&nbsp;Shuqin Wang ,&nbsp;Yun Li ,&nbsp;Yunhao Yuan ,&nbsp;Jipeng Qiang","doi":"10.1016/j.neucom.2024.129008","DOIUrl":null,"url":null,"abstract":"<div><div>Unsupervised domain adaptation methods aim to facilitate learning tasks in unlabeled target domains using labeled information from related source domains. Recently, prompt-tuning has emerged as a powerful instrument to incorporate templates that reformulate input examples into equivalent cloze-style phrases. However, there are still two great challenges for domain adaptation: (1) Existing prompt-tuning methods only rely on the general knowledge distributed in upstream pre-trained language models to alleviate the domain discrepancy. How to incorporate specific features in the source and target domains into prompt-tuning model is still divergent and under-explored; (2) In the prompt-tuning, either the crafted template methods are time-consuming and labor-intensive, or automatic prompt generation methods cannot achieve satisfied performance. To address these issues, in this paper, we propose an innovative Soft Prompt-tuning method for Unsupervised Domain Adaptation via Self-Supervision, which combines two novel ideas: Firstly, instead of only stimulating knowledge distributed in the pre-trained model, we further employ hierarchically clustered optimization strategies in a self-supervised manner to retrieve knowledge for the verbalizer construction in prompt-tuning. Secondly, we construct prompts with the special designed verbalizer that facilitate the transfer of learning representations across domains, which can consider both the automatic template generation and cross-domain classification performance. Extensive experimental results demonstrate that our method even outperforms SOTA baselines that utilize external open knowledge with much less computational time.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"617 ","pages":"Article 129008"},"PeriodicalIF":5.5000,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S092523122401779X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Unsupervised domain adaptation methods aim to facilitate learning tasks in unlabeled target domains using labeled information from related source domains. Recently, prompt-tuning has emerged as a powerful instrument that incorporates templates to reformulate input examples into equivalent cloze-style phrases. However, two major challenges remain for domain adaptation: (1) existing prompt-tuning methods rely only on the general knowledge distributed in upstream pre-trained language models to alleviate the domain discrepancy, and how to incorporate features specific to the source and target domains into the prompt-tuning model remains under-explored; (2) in prompt-tuning, hand-crafted template methods are time-consuming and labor-intensive, while automatic prompt generation methods cannot achieve satisfactory performance. To address these issues, in this paper we propose an innovative Soft Prompt-tuning method for Unsupervised Domain Adaptation via Self-Supervision, which combines two novel ideas. First, instead of only stimulating knowledge distributed in the pre-trained model, we further employ hierarchically clustered optimization strategies in a self-supervised manner to retrieve knowledge for verbalizer construction in prompt-tuning. Second, we construct prompts with the specially designed verbalizer that facilitate the transfer of learning representations across domains, taking into account both automatic template generation and cross-domain classification performance. Extensive experimental results demonstrate that our method outperforms even SOTA baselines that utilize external open knowledge, while requiring much less computation time.
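
To make the cloze-style mechanism described above concrete, the following is a minimal sketch, not the authors' implementation, of prompt-tuning with a template and a verbalizer on a masked language model. The backbone model, template wording, and label words are illustrative assumptions; in the paper the verbalizer is built from hierarchically clustered, self-supervised knowledge rather than hand-picked words.

    # Minimal sketch of cloze-style prompt classification with a verbalizer.
    # Model name, template, and label words are illustrative assumptions.
    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    model_name = "bert-base-uncased"  # assumption: any masked LM backbone
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)

    # Verbalizer: maps each class to label words. In the paper these words are
    # retrieved via hierarchically clustered, self-supervised optimization;
    # here they are hard-coded for illustration only.
    verbalizer = {"positive": ["great", "good"], "negative": ["terrible", "bad"]}

    def classify(text: str) -> str:
        # Template reformulates the input into an equivalent cloze-style phrase.
        prompt = f"{text} It was {tokenizer.mask_token}."
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        # Locate the [MASK] position and score each class by the summed
        # logits of its label words at that position.
        mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
        mask_logits = logits[0, mask_pos]
        scores = {
            label: sum(mask_logits[tokenizer.convert_tokens_to_ids(w)].item() for w in words)
            for label, words in verbalizer.items()
        }
        return max(scores, key=scores.get)

    print(classify("The plot was gripping and the acting superb."))

In the soft prompt-tuning setting the template tokens would be continuous embeddings learned jointly with the verbalizer, rather than the fixed text used in this sketch.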
Source journal: Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10 | Self-citation rate: 10.00% | Articles per year: 1382 | Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.