Yi Zhu, Shuqin Wang, Yun Li, Yunhao Yuan, Jipeng Qiang
Neurocomputing, Volume 617, Article 129008. Published 28 November 2024. DOI: 10.1016/j.neucom.2024.129008
Soft prompt-tuning for unsupervised domain adaptation via self-supervision
Unsupervised domain adaptation methods aim to facilitate learning tasks in unlabeled target domains using labeled information from related source domains. Recently, prompt-tuning has emerged as a powerful technique that incorporates templates to reformulate input examples as equivalent cloze-style phrases. However, two major challenges remain for domain adaptation: (1) Existing prompt-tuning methods rely only on the general knowledge encoded in upstream pre-trained language models to alleviate domain discrepancy; how to incorporate domain-specific features of the source and target domains into the prompt-tuning model is still under-explored. (2) In prompt-tuning, hand-crafted template methods are time-consuming and labor-intensive, while automatic prompt generation methods fail to achieve satisfactory performance. To address these issues, in this paper we propose an innovative Soft Prompt-tuning method for Unsupervised Domain Adaptation via Self-Supervision, which combines two novel ideas. First, instead of only eliciting knowledge distributed in the pre-trained model, we further employ hierarchically clustered optimization strategies in a self-supervised manner to retrieve knowledge for verbalizer construction in prompt-tuning. Second, we construct prompts with the specially designed verbalizer to facilitate the transfer of learned representations across domains, accounting for both automatic template generation and cross-domain classification performance. Extensive experimental results demonstrate that our method outperforms state-of-the-art baselines that utilize external open knowledge, while requiring much less computation time.
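To make the two ideas concrete, the sketch below illustrates the general pattern the abstract describes: trainable soft prompt vectors are prepended to the input of a frozen masked language model, and candidate label words are grouped by hierarchical clustering of their embeddings to form a verbalizer whose clusters map to classes. This is an illustrative reconstruction, not the authors' implementation; the backbone model (bert-base-uncased), prompt length, candidate word list, and helper names are assumptions.

```python
# Illustrative sketch only: model choice, prompt length, and clustering setup are
# assumptions, not the paper's released implementation.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM
from sklearn.cluster import AgglomerativeClustering

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")


class SoftPromptMLM(nn.Module):
    """Prepends trainable soft prompt vectors to the input of a frozen masked LM."""

    def __init__(self, mlm, n_prompt_tokens=10):
        super().__init__()
        self.mlm = mlm
        for p in self.mlm.parameters():  # freeze the PLM; only the prompt is tuned
            p.requires_grad = False
        hidden = mlm.config.hidden_size
        self.soft_prompt = nn.Parameter(torch.randn(n_prompt_tokens, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        tok_emb = self.mlm.get_input_embeddings()(input_ids)
        prompt = self.soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(
            input_ids.size(0), prompt.size(1),
            dtype=attention_mask.dtype, device=attention_mask.device,
        )
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.mlm(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
        return out.logits  # vocabulary scores for every position (prompt + tokens)


def build_cluster_verbalizer(candidate_words, n_classes):
    """Rough stand-in for self-supervised verbalizer construction: cluster candidate
    label words by their PLM input embeddings and map each cluster to one class."""
    ids = tokenizer.convert_tokens_to_ids(candidate_words)
    vecs = mlm.get_input_embeddings().weight[ids].detach().numpy()
    cluster_of = AgglomerativeClustering(n_clusters=n_classes).fit_predict(vecs)
    return {c: [ids[i] for i, lab in enumerate(cluster_of) if lab == c]
            for c in range(n_classes)}


# Usage: score the [MASK] slot and sum probabilities over each cluster's label words.
model = SoftPromptMLM(mlm)
verbalizer = build_cluster_verbalizer(
    ["good", "great", "excellent", "bad", "poor", "terrible"], n_classes=2)
enc = tokenizer("The movie was [MASK]. A thrilling ride from start to finish.",
                return_tensors="pt")
logits = model(enc["input_ids"], enc["attention_mask"])
mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
mask_probs = logits[0, model.soft_prompt.size(0) + mask_pos].softmax(dim=-1)
class_scores = {c: mask_probs[word_ids].sum().item()
                for c, word_ids in verbalizer.items()}
print(class_scores)
```

In this sketch only the soft prompt parameters receive gradients, which mirrors the parameter-efficient spirit of prompt-tuning; the clustering step is a simplified proxy for the paper's hierarchically clustered, self-supervised verbalizer construction.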
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.