{"title":"引导式对比领域适应","authors":"Yan Jia, Yuqing Cheng, Peng Qiao","doi":"10.1007/s12293-024-00422-6","DOIUrl":null,"url":null,"abstract":"<p>Self-supervised learning, particularly through contrastive learning, has shown significant promise in vision tasks. Although effective, contrastive learning faces the issue of false negatives, particularly under domain shifts in domain adaptation scenarios. The Bootstrap Your Own Latent approach, with its asymmetric structure and avoidance of unnecessary negative samples, offers a foundation to address this issue, which remains underexplored in domain adaptation. We introduce an asymmetrically structured network, the Bootstrap Contrastive Domain Adaptation (BCDA), that innovatively applies contrastive learning to domain adaptation. BCDA utilizes a bootstrap clustering positive sampling strategy to ensure stable, end-to-end domain adaptation, preventing model collapse often seen in asymmetric networks. This method not only aligns domains internally through mean square loss but also enhances semantic inter-domain alignment, effectively eliminating false negatives. Our approach, BCDA, represents the first foray into non-contrastive domain adaptation and could serve as a foundational model for future studies. It shows potential to supersede contrastive domain adaptation methods in eliminating false negatives, evidenced by high-level results on three well-known domain adaptation benchmark datasets.</p>","PeriodicalId":48780,"journal":{"name":"Memetic Computing","volume":"10 1","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Bootstrap contrastive domain adaptation\",\"authors\":\"Yan Jia, Yuqing Cheng, Peng Qiao\",\"doi\":\"10.1007/s12293-024-00422-6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Self-supervised learning, particularly through contrastive learning, has shown significant promise in vision tasks. Although effective, contrastive learning faces the issue of false negatives, particularly under domain shifts in domain adaptation scenarios. The Bootstrap Your Own Latent approach, with its asymmetric structure and avoidance of unnecessary negative samples, offers a foundation to address this issue, which remains underexplored in domain adaptation. We introduce an asymmetrically structured network, the Bootstrap Contrastive Domain Adaptation (BCDA), that innovatively applies contrastive learning to domain adaptation. BCDA utilizes a bootstrap clustering positive sampling strategy to ensure stable, end-to-end domain adaptation, preventing model collapse often seen in asymmetric networks. This method not only aligns domains internally through mean square loss but also enhances semantic inter-domain alignment, effectively eliminating false negatives. Our approach, BCDA, represents the first foray into non-contrastive domain adaptation and could serve as a foundational model for future studies. 
It shows potential to supersede contrastive domain adaptation methods in eliminating false negatives, evidenced by high-level results on three well-known domain adaptation benchmark datasets.</p>\",\"PeriodicalId\":48780,\"journal\":{\"name\":\"Memetic Computing\",\"volume\":\"10 1\",\"pages\":\"\"},\"PeriodicalIF\":3.3000,\"publicationDate\":\"2024-08-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Memetic Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s12293-024-00422-6\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Memetic Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12293-024-00422-6","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Self-supervised learning, particularly through contrastive learning, has shown significant promise in vision tasks. Although effective, contrastive learning faces the issue of false negatives, particularly under the domain shifts that arise in domain adaptation scenarios. The Bootstrap Your Own Latent approach, with its asymmetric structure and avoidance of unnecessary negative samples, offers a foundation for addressing this issue, which remains underexplored in domain adaptation. We introduce an asymmetrically structured network, Bootstrap Contrastive Domain Adaptation (BCDA), which applies contrastive learning to domain adaptation in a novel way. BCDA uses a bootstrap clustering positive sampling strategy to ensure stable, end-to-end domain adaptation, preventing the model collapse often seen in asymmetric networks. The method not only aligns domains internally through a mean-square loss but also strengthens semantic inter-domain alignment, effectively eliminating false negatives. BCDA represents the first foray into non-contrastive domain adaptation and could serve as a foundational model for future studies. It shows potential to supersede contrastive domain adaptation methods in eliminating false negatives, as evidenced by strong results on three well-known domain adaptation benchmark datasets.
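For illustration only, the following is a minimal sketch, not the authors' released implementation, of the two ingredients the abstract describes: a BYOL-style asymmetric online/target branch trained with a mean-square alignment loss, and a clustering-style positive sampling step that pairs each source feature with a similar target-domain feature so that cross-domain positives replace negatives. All names (OnlineBranch, ema_update, pseudo_positive_pairs), the feature dimensions, and the momentum value are assumptions chosen for brevity.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class OnlineBranch(nn.Module):
    # Encoder + predictor: the trainable, asymmetric branch of the network.
    def __init__(self, in_dim=512, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, dim))
        self.predictor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

online = OnlineBranch()
target = copy.deepcopy(online)                # momentum (EMA) branch; never back-propagated
for p in target.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.SGD(online.parameters(), lr=0.05)

@torch.no_grad()
def ema_update(online, target, m=0.996):
    # Slowly move the target branch toward the online branch (BYOL-style update).
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(m).add_(po, alpha=1.0 - m)

def byol_mse(p, z):
    # Negative-free alignment: mean-square error between l2-normalised vectors.
    return F.mse_loss(F.normalize(p, dim=-1), F.normalize(z, dim=-1))

@torch.no_grad()
def pseudo_positive_pairs(src_feat, tgt_feat):
    # Hypothetical clustering-style positive sampling: pair every source sample with
    # its most similar target-domain feature, so cross-domain positives replace negatives.
    sims = F.normalize(src_feat, dim=-1) @ F.normalize(tgt_feat, dim=-1).T
    return sims.argmax(dim=1)

# One illustrative training step on random stand-in features.
src, tgt = torch.randn(32, 512), torch.randn(32, 512)
idx = pseudo_positive_pairs(target.encoder(src), target.encoder(tgt))
p_src = online.predictor(online.encoder(src))        # online prediction for source samples
with torch.no_grad():
    z_tgt = target.encoder(tgt)[idx]                 # EMA-branch projection of the paired positives
loss = byol_mse(p_src, z_tgt)
optimizer.zero_grad()
loss.backward()
optimizer.step()
ema_update(online, target)

In this sketch the target branch is updated only by exponential moving average; together with the extra predictor, this is the asymmetry that BYOL-style methods rely on to avoid collapse without negative samples.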
Memetic Computing (Computer Science, Artificial Intelligence; Operations Research & Management Science)
CiteScore: 6.80
Self-citation rate: 12.80%
Articles published: 31
Journal description:
Memes have been defined as basic units of transferrable information that reside in the brain and are propagated across populations through the process of imitation. From an algorithmic point of view, memes have come to be regarded as building-blocks of prior knowledge, expressed in arbitrary computational representations (e.g., local search heuristics, fuzzy rules, neural models, etc.), that have been acquired through experience by a human or machine, and can be imitated (i.e., reused) across problems.
The Memetic Computing journal welcomes papers incorporating the aforementioned socio-cultural notion of memes into artificial systems, with particular emphasis on enhancing the efficacy of computational and artificial intelligence techniques for search, optimization, and machine learning through explicit prior knowledge incorporation. The journal thus aims to be an outlet for high-quality theoretical and applied research on hybrid, knowledge-driven computational approaches that may be characterized under any of the following categories of memetics:
Type 1: General-purpose algorithms integrated with human-crafted heuristics that capture some form of prior domain knowledge; e.g., traditional memetic algorithms hybridizing evolutionary global search with a problem-specific local search (a minimal sketch follows after this list).
Type 2: Algorithms with the ability to automatically select, adapt, and reuse the most appropriate heuristics from a diverse pool of available choices; e.g., learning a mapping between global search operators and multiple local search schemes, given an optimization problem at hand.
Type 3: Algorithms that autonomously learn with experience, adaptively reusing data and/or machine learning models drawn from related problems as prior knowledge in new target tasks of interest; examples include, but are not limited to, transfer learning and optimization, multi-task learning and optimization, or any other multi-X evolutionary learning and optimization methodologies.
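As a concrete illustration of the Type 1 category above, here is a minimal sketch of an evolutionary global search whose offspring are refined by a hand-crafted hill-climbing local search. The toy sphere objective, the operators, and every parameter value are assumptions chosen for brevity, not a published algorithm.

import random

def sphere(x):
    # Toy minimisation objective: f(x) = sum(x_i^2), optimum at the origin.
    return sum(v * v for v in x)

def local_search(x, f, step=0.1, iters=20):
    # Simple coordinate-wise hill climbing: the "meme" injecting problem-specific knowledge.
    best, best_f = list(x), f(x)
    for _ in range(iters):
        for i in range(len(best)):
            for delta in (-step, step):
                cand = list(best)
                cand[i] += delta
                if (cf := f(cand)) < best_f:
                    best, best_f = cand, cf
    return best, best_f

def memetic_algorithm(f, dim=5, pop_size=20, gens=50, sigma=0.5):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                                       # rank by fitness (lower is better)
        parents = pop[: pop_size // 2]
        children = []
        for p in parents:
            child = [v + random.gauss(0, sigma) for v in p]   # Gaussian mutation (global search)
            child, _ = local_search(child, f)                 # local refinement (the memetic step)
            children.append(child)
        pop = parents + children
    return min(pop, key=f)

best = memetic_algorithm(sphere)
print("best solution:", [round(v, 3) for v in best], "fitness:", round(sphere(best), 5))

The split between the Gaussian mutation (global exploration) and local_search (problem-specific refinement) is the hybridization that distinguishes a Type 1 method from a plain evolutionary algorithm.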