Integrating multimodal contrastive learning with prototypical domain alignment for unsupervised domain adaptation of time series

Impact Factor: 7.5 · JCR Q1 (Automation & Control Systems) · CAS Tier 2 (Computer Science) · Engineering Applications of Artificial Intelligence · Publication date: 2024-08-31 · DOI: 10.1016/j.engappai.2024.109205
Citations: 0

Abstract


Unsupervised domain adaptation (UDA) addresses the challenge of transferring knowledge from a labeled source domain to an unlabeled target domain. This task is particularly critical for time series data, characterized by unique temporal dynamics. However, existing methods often fail to capture these temporal dependencies, leading to domain discrepancies and loss of semantic information. In this study, we propose a novel framework for the unsupervised domain adaptation of time series (UDATS) that integrates Multimodal Contrastive Adaptation (MCA) and Prototypical Domain Alignment (PDA). MCA leverages image encoding techniques and prompt learning to capture complex temporal patterns while preserving semantic information. PDA constructs multimodal prototypes, combining visual and textual features to align target domain samples accurately. Our framework demonstrates superior performance across various application domains, including human activity recognition, mortality prediction, and fault detection. Experiments show our method effectively addresses domain discrepancies while preserving essential semantic content, outperforming state-of-the-art models.
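The abstract does not give implementation details, but the prototypical-alignment idea it describes is standard: build one prototype (mean feature vector) per class from labeled source embeddings, then assign unlabeled target samples to the nearest prototype. A minimal sketch, assuming cosine similarity and generic feature vectors (all names and the toy data are illustrative, not from the paper):

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    # One prototype per class: the mean feature vector of that class's samples.
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def assign_by_prototype(target_features, prototypes):
    # L2-normalize rows so the dot product equals cosine similarity,
    # then label each target sample with its most similar prototype.
    t = target_features / np.linalg.norm(target_features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (t @ p.T).argmax(axis=1)

# Toy labeled source domain: two well-separated classes in a 2-D feature space.
src = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
lbl = np.array([0, 0, 1, 1])
protos = class_prototypes(src, lbl, num_classes=2)

# Unlabeled target samples receive pseudo-labels from the nearest prototype.
tgt = np.array([[0.8, 0.2], [0.2, 0.8]])
pseudo = assign_by_prototype(tgt, protos)
print(pseudo.tolist())  # [0, 1]
```

In the paper's framework the prototypes are described as multimodal (combining visual and textual features); the sketch above shows only the alignment step on a single generic feature space.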

Journal
Engineering Applications of Artificial Intelligence (Engineering: Electrical & Electronic)
CiteScore: 9.60
Self-citation rate: 10.00%
Articles per year: 505
Review time: 68 days
About the journal: Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes.