Improved YOLO v5 for Railway PCCS Tiny Defect Detection
Pub Date: 2022-07-15 | DOI: 10.1109/icaci55529.2022.9837504
T. Zhao, Xiukun Wei, Xuewu Yang
Pantograph defects in rolling stock are directly related to operational safety, so timely detection of pantograph health status is one of the most important tasks in rolling stock maintenance. To achieve rapid and accurate detection of tiny defects on the Pantograph Carbon Contact Strip (PCCS), this paper puts forward an improved YOLO v5 model in which the Focal Loss function is applied. In addition, a four-head structure is designed to retain more shallow features, and the original PANet is replaced with BiFPN to achieve cross-scale feature fusion. Comparative experiments are then conducted on a self-made dataset. The results show that our method improves the detection accuracy of tiny targets and reduces the false positive rate. mAP@0.5 reaches 99.9% and recall is 95.4%, while FPS reaches 196, which means our model fully meets the requirement of real-time, precise tiny defect detection.
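Focal Loss, referenced above, down-weights well-classified examples so training concentrates on hard cases such as tiny defects. As a minimal illustration (not the authors' exact integration into YOLO v5; alpha and gamma are the illustrative defaults from the original focal-loss paper), a binary focal loss in PyTorch might look like:

```python
# Minimal sketch of binary focal loss (Lin et al., 2017). The paper applies
# it inside YOLO v5; that integration is not reproduced here.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    logits:  raw predictions, shape (N,)
    targets: float labels in {0., 1.}, shape (N,)
    """
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)            # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # (1 - p_t)^gamma shrinks the loss of easy (high-confidence) examples.
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```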
{"title":"Improved YOLO v5 for Railway PCCS Tiny Defect Detection","authors":"T. Zhao, Xiukun Wei, Xuewu Yang","doi":"10.1109/icaci55529.2022.9837504","DOIUrl":"https://doi.org/10.1109/icaci55529.2022.9837504","url":null,"abstract":"Pantograph defect of rolling stocks is directly related to its operation safety, so timely detection of its health status is one of the most important tasks in rolling stocks maintenance. In order to achieve rapid and accurate detection of PCCS (Pantograph Carbon Contact Strip) tiny defect, this paper puts forward an improved YOLO v5 model, in which Focal Loss function is applied. Besides, four-head structure is designed to retain more shallow features and the original PANet is replaced with BiFPN to achieve cross-scale feature fusion. After that, comparative experiments are conducted on self-made dataset. The results shows that our method improves the detection accuracy of tiny targets and reduces the false positive rate. The mAP@0.5 reaches 99.9% and Recall is 95.4%, while FPS reaches 196, which means our model can fully meet the requirement of real-time precise tiny detect detection.","PeriodicalId":412347,"journal":{"name":"2022 14th International Conference on Advanced Computational Intelligence (ICACI)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132459874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Modal Fusion Transformer for Multivariate Time Series Classification
Pub Date: 2022-07-15 | DOI: 10.1109/icaci55529.2022.9837525
Hao-Yue Jiang, Lianguang Liu, Cheng Lian
With the development of sensor technology, multivariate time series classification has become an essential element of temporal data mining. Multivariate time series are everywhere in our daily lives, in areas such as finance, weather, and healthcare. Meanwhile, Transformers have achieved excellent results on NLP and CV tasks. The Vision Transformer (ViT) achieves excellent results compared to state-of-the-art convolutional networks when pre-trained on large amounts of data and transferred to multiple small and medium image recognition benchmarks, while requiring significantly fewer computing resources. At the same time, multi-modal approaches can extract richer features, and related research has also developed significantly. In this work, we propose a multi-modal fusion transformer for time series classification. We use the Gramian Angular Field (GAF) to convert time series into 2D images, then use CNNs to extract features from the 1D time series and 2D images separately and fuse them. Finally, the fused output of the transformer encoder is fed into ResNet for classification. We conduct extensive experiments on twelve time series datasets. Compared to several baselines, our model obtains higher accuracy.
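For readers unfamiliar with GAF, the transform encodes each value as an angle and builds an image from pairwise angular sums. A minimal NumPy sketch of the summation variant (GASF), assuming min-max rescaling to [-1, 1] since the abstract does not specify the normalization or variant used:

```python
# Sketch of a Gramian Angular Summation Field (GASF) transform, one common
# GAF variant; the paper's exact preprocessing is an assumption here.
import numpy as np

def gasf(series):
    """Map a 1D series of length n to an (n, n) image."""
    x = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so arccos is defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))        # angular encoding
    # GASF(i, j) = cos(phi_i + phi_j)
    return np.cos(phi[:, None] + phi[None, :])

image = gasf(np.sin(np.linspace(0, 4 * np.pi, 64)))   # (64, 64) image for a CNN
```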
{"title":"Multi-Modal Fusion Transformer for Multivariate Time Series Classification","authors":"Hao-Yue Jiang, Lianguang Liu, Cheng Lian","doi":"10.1109/icaci55529.2022.9837525","DOIUrl":"https://doi.org/10.1109/icaci55529.2022.9837525","url":null,"abstract":"With the development of sensor technology, multi-variate time series classification is an essential element in time data mining. Multivariate time series are everywhere in our daily lives, like finance, the weather, and the healthcare system. In the meantime, Transformers has achieved excellent results in terms of NLP and CV tasks. The Vision Transformer (ViT) achieves excellent results compared to SOTA’s convolutional networks when pre-training large amounts of data and transferring it to multiple small to medium image recognition baselines while significantly reducing the required computing resources. At the same time, multi-modality can extract more excellent features, and related research has also developed significantly. In this work, we propose a multi-modal fusion transformer for time series classification. We use Gramian Angular Field (GAF) to convert time series to 2D images and then use CNN to extract features from 1D time series and 2D images separately to fuse them. Finally, the information output from the transformer encoder fuse is entered in ResNet for classification. We conduct extensive experiments on twelve time series datasets. Compared to several baselines, our model has obtained higher accuracy.","PeriodicalId":412347,"journal":{"name":"2022 14th International Conference on Advanced Computational Intelligence (ICACI)","volume":"141 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120833066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Base on Megapixel Color Fundus Photos for Multi-label Disease Classification
Pub Date: 2022-07-15 | DOI: 10.1109/icaci55529.2022.9837676
Honggang Yang, Jiejie Chen, Rong Luan, Mengfei Xu, Lin Ma, Xiaoqi Zhou
This paper discusses a new challenge for artificial intelligence in predicting fundus diseases: using only unprocessed megapixel Color Fundus Photos (CFP) to complete multi-label, multi-class classification and lesion localization tasks at the same time. To solve this problem, a Double Flow Multi Instance Neural Network (DF-MINN) is designed. DF-MINN is an end-to-end dual-flow network. It uses a Multi Instance Spatial Attention (MISA) module to extract local information and a Global Priorities Network based on Involvement (GPNI) module to analyze the overall content. Experiments on the open multi-label fundus dataset OIA-ODIR show that DF-MINN achieves higher average precision than previous networks in predicting all seven diseases. Ablation experiments further prove the importance of high-resolution images in the diagnosis of fundus diseases.
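The MISA and GPNI modules are specific to this paper and not detailed in the abstract. As a loose, hypothetical sketch of the general idea behind multi-instance attention on megapixel images (treat image patches as instances, score each one, and pool them by attention weight), in PyTorch; all names and shapes below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical attention-based multi-instance pooling over image patches.
import torch
import torch.nn as nn

class InstanceAttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # one attention logit per instance

    def forward(self, feats):            # feats: (batch, n_instances, dim)
        weights = torch.softmax(self.score(feats), dim=1)   # (B, N, 1)
        # Weighted sum of patch features -> one bag-level feature per image.
        return (weights * feats).sum(dim=1)                 # (B, dim)

pool = InstanceAttentionPool(dim=256)
bag = pool(torch.randn(2, 49, 256))      # e.g. 49 patches from a 7x7 grid
```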
{"title":"Base on Megapixel Color Fundus Photos for Multi-label Disease Classification","authors":"Honggang Yang, Jiejie Chen, Rong Luan, Mengfei Xu, Lin Ma, Xiaoqi Zhou","doi":"10.1109/icaci55529.2022.9837676","DOIUrl":"https://doi.org/10.1109/icaci55529.2022.9837676","url":null,"abstract":"This paper discusses a new challenge of artificial intelligence in predicting fundus diseases: using only unprocessed million pixel Color Fundus Photos(CFP) to complete multi-label multi classification and lesion location tasks at the same time. In order to solve this problem, Double Flow Multi Instance Neural Network(DF-MINN) is designed. Df-MINN is an end-to-end dual flow network. It uses Multi Instance Spatial Attention(MISA) module to extract local information and Global Priorities Network base on Involvement(GPNI) module to analyze the overall content. In addition, experiments on the open multi label fundus dataset OIA-ODIR showed that DF-MINN was higher average precision than the previous network in the prediction of all seven diseases. Ablation experiments further proved the importance of high-resolution images in the diagnosis of fundus diseases.","PeriodicalId":412347,"journal":{"name":"2022 14th International Conference on Advanced Computational Intelligence (ICACI)","volume":"EM-34 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121006180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Winning Solution to the iFLYTEK Challenge 2021 Cultivated Land Extraction from High-Resolution Remote Sensing Images
Pub Date: 2022-02-22 | DOI: 10.1109/ICACI55529.2022.9837765
Z. Zhao, Yuqiu Liu, Gang Zhang, Liang Tang, Xiao-Ning Hu
Extracting cultivated land accurately from high-resolution remote sensing images is a basic task for precision agriculture. This paper introduces our solution to the iFLYTEK Challenge 2021 on cultivated land extraction from high-resolution remote sensing images. We established a highly effective and efficient pipeline to solve this problem. We first divided the original images into small tiles and performed instance segmentation on each tile separately. We explored several instance segmentation algorithms that work well on natural images and developed a set of effective methods applicable to remote sensing images. We then merged the prediction results of all small tiles into seamless, continuous segmentation results through our proposed overlap-tile fusion strategy. We achieved first place among 486 teams in the challenge.
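The overlap-tile idea, popularized by U-Net, predicts on overlapping tiles and keeps only each tile's central region when stitching, so tile borders never appear in the output. A rough sketch under that assumption; the authors' actual fusion of instance masks across tile borders is necessarily more involved, and `predict`, `tile`, and `overlap` below are illustrative placeholders:

```python
# Sketch of overlap tiling with center-crop stitching for large images.
import numpy as np

def predict_tiled(image, predict, tile=512, overlap=64):
    """image: (H, W, C) array; predict: fn mapping a patch to a same-size mask."""
    h, w = image.shape[:2]
    step = tile - 2 * overlap            # stride between tile centers
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h, step):
        for x in range(0, w, step):
            y0, x0 = max(0, y - overlap), max(0, x - overlap)
            mask = predict(image[y0:y0 + tile, x0:x0 + tile])
            # Keep only the central region so tile borders never show.
            cy, cx = y - y0, x - x0
            out[y:y + step, x:x + step] = mask[cy:cy + step, cx:cx + step]
    return out
```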
{"title":"The Winning Solution to the iFLYTEK Challenge 2021 Cultivated Land Extraction from High-Resolution Remote Sensing Images","authors":"Z. Zhao, Yuqiu Liu, Gang Zhang, Liang Tang, Xiao-Ning Hu","doi":"10.1109/ICACI55529.2022.9837765","DOIUrl":"https://doi.org/10.1109/ICACI55529.2022.9837765","url":null,"abstract":"Extracting cultivated land accurately from high-resolution remote images is a basic task for precision agriculture. This paper introduces our solution to iFLYTEK challenge 2021 cultivated land extraction from high-resolution remote sensing images. We established a highly effective and efficient pipeline to solve this problem. We first divided the original images into small tiles and separately performed instance segmentation on each tile. We explored several instance segmentation algorithms that work well on natural images and developed a set of effective methods that are applicable to remote sensing images. Then we merged the prediction results of all small tiles into seamless, continuous segmentation results through our proposed overlap-tile fusion strategy. We achieved first place among 486 teams in the challenge.","PeriodicalId":412347,"journal":{"name":"2022 14th International Conference on Advanced Computational Intelligence (ICACI)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131501533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}