Multi-Modal Fusion Transformer for Multivariate Time Series Classification
Hao-Yue Jiang, Lianguang Liu, Cheng Lian
2022 14th International Conference on Advanced Computational Intelligence (ICACI), published 2022-07-15
DOI: 10.1109/icaci55529.2022.9837525
Citations: 4
Abstract
With the development of sensor technology, multivariate time series classification has become an essential task in temporal data mining. Multivariate time series arise everywhere in daily life, in domains such as finance, weather, and healthcare. Meanwhile, Transformers have achieved excellent results on NLP and CV tasks. The Vision Transformer (ViT), when pre-trained on large amounts of data and transferred to multiple small- to medium-sized image recognition benchmarks, attains excellent results compared to state-of-the-art convolutional networks while requiring significantly fewer computing resources. At the same time, multi-modal approaches can extract richer features, and related research has advanced considerably. In this work, we propose a multi-modal fusion Transformer for time series classification. We use the Gramian Angular Field (GAF) to convert time series into 2D images, then use CNNs to extract features from the 1D time series and the 2D images separately and fuse them. Finally, the fused output of the Transformer encoder is fed into a ResNet for classification. We conduct extensive experiments on twelve time series datasets; compared to several baselines, our model obtains higher accuracy.
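The GAF conversion described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it assumes the summation variant (GASF), in which the series is rescaled to [-1, 1], values are mapped to angles via arccos, and the image is the pairwise matrix cos(φ_i + φ_j).

```python
import numpy as np

def gramian_angular_field(series):
    """Convert a 1D time series to a 2D Gramian Angular Summation Field image.

    Steps: min-max rescale to [-1, 1], map values to angles with arccos,
    then build the matrix cos(phi_i + phi_j).
    """
    x = np.asarray(series, dtype=float)
    x_min, x_max = x.min(), x.max()
    # Min-max rescale to [-1, 1]
    x_scaled = 2 * (x - x_min) / (x_max - x_min) - 1
    # Clip guards against floating-point overshoot before arccos
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    # GASF: pairwise angular summation via broadcasting
    return np.cos(phi[:, None] + phi[None, :])

ts = np.sin(np.linspace(0, 2 * np.pi, 8))
gaf = gramian_angular_field(ts)
print(gaf.shape)  # (8, 8)
```

A series of length n thus becomes an n-by-n image, which can then be fed to a 2D CNN as in the proposed pipeline.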