
Latest publications: 2022 14th International Conference on Advanced Computational Intelligence (ICACI)

Improved YOLO v5 for Railway PCCS Tiny Defect Detection
Pub Date : 2022-07-15 DOI: 10.1109/icaci55529.2022.9837504
T. Zhao, Xiukun Wei, Xuewu Yang
Pantograph defects in rolling stock are directly related to operational safety, so timely detection of pantograph health status is one of the most important tasks in rolling stock maintenance. To achieve rapid and accurate detection of tiny PCCS (Pantograph Carbon Contact Strip) defects, this paper puts forward an improved YOLO v5 model in which the Focal Loss function is applied. In addition, a four-head structure is designed to retain more shallow features, and the original PANet is replaced with BiFPN to achieve cross-scale feature fusion. Comparative experiments are then conducted on a self-made dataset. The results show that the method improves the detection accuracy of tiny targets and reduces the false positive rate: mAP@0.5 reaches 99.9% and Recall 95.4%, while FPS reaches 196, meaning the model fully meets the requirements of real-time, precise tiny defect detection.
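The abstract names the Focal Loss function as the key change to the YOLO v5 training objective. A minimal sketch of the binary focal loss (the defaults gamma=2.0 and alpha=0.25 are the commonly used values, not necessarily the paper's settings) might look like:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one predicted probability p and label y (0 or 1).

    The (1 - p_t)^gamma factor down-weights easy, well-classified examples,
    focusing training on hard cases such as rare tiny defects.
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With gamma > 0, a confident correct prediction (p_t near 1) contributes almost nothing, so the loss concentrates on the hard minority of defect pixels.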
Cited by: 1
Multi-Modal Fusion Transformer for Multivariate Time Series Classification
Pub Date : 2022-07-15 DOI: 10.1109/icaci55529.2022.9837525
Hao-Yue Jiang, Lianguang Liu, Cheng Lian
With the development of sensor technology, multivariate time series classification has become an essential element of temporal data mining. Multivariate time series are everywhere in daily life, for example in finance, weather, and healthcare systems. Meanwhile, Transformers have achieved excellent results on NLP and CV tasks. The Vision Transformer (ViT) achieves excellent results compared with state-of-the-art convolutional networks when pre-trained on large amounts of data and transferred to multiple small-to-medium image recognition benchmarks, while significantly reducing the required computing resources. At the same time, multi-modal approaches can extract richer features, and related research has also developed significantly. In this work, we propose a multi-modal fusion transformer for time series classification. We use the Gramian Angular Field (GAF) to convert time series into 2D images, then use CNNs to extract features from the 1D time series and the 2D images separately and fuse them. Finally, the fused output of the transformer encoder is fed into ResNet for classification. We conduct extensive experiments on twelve time series datasets. Compared with several baselines, our model obtains higher accuracy.
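The abstract's first step is the GAF conversion of a 1D series into a 2D image. A sketch of the transform, assuming the summation variant (GASF) since the abstract does not state which variant the authors used:

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian Angular Summation Field: encode a 1D series as a 2D image.

    Each value is rescaled to [-1, 1], mapped to a polar angle phi = arccos(x),
    and the output pixel (i, j) is cos(phi_i + phi_j), preserving temporal
    correlations as image texture.
    """
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    x = 2.0 * (x - x_min) / (x_max - x_min) - 1.0   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))          # polar-coordinate angles
    return np.cos(phi[:, None] + phi[None, :])      # (n, n) GASF image
```

The resulting n-by-n image can then be fed to a 2D CNN alongside the raw 1D series, as the abstract describes.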
Cited by: 4
Base on Megapixel Color Fundus Photos for Multi-label Disease Classification
Pub Date : 2022-07-15 DOI: 10.1109/icaci55529.2022.9837676
Honggang Yang, Jiejie Chen, Rong Luan, Mengfei Xu, Lin Ma, Xiaoqi Zhou
This paper discusses a new challenge for artificial intelligence in predicting fundus diseases: using only unprocessed megapixel Color Fundus Photos (CFP) to complete multi-label, multi-class classification and lesion localization tasks at the same time. To solve this problem, a Double Flow Multi Instance Neural Network (DF-MINN) is designed. DF-MINN is an end-to-end dual-flow network. It uses a Multi Instance Spatial Attention (MISA) module to extract local information and a Global Priorities Network based on Involvement (GPNI) module to analyze the overall content. Experiments on the open multi-label fundus dataset OIA-ODIR show that DF-MINN achieves higher average precision than previous networks in predicting all seven diseases. Ablation experiments further prove the importance of high-resolution images in the diagnosis of fundus diseases.
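The MISA module is not specified in the abstract; as an illustration of the general idea behind attention-based multiple-instance pooling that such modules build on (treating image patches as instances of one bag), a hedged sketch with hypothetical weight shapes:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                # numerical stability
    e = np.exp(z)
    return e / e.sum()

def mil_attention_pool(instances, w, v):
    """Attention-based multiple-instance pooling (illustrative, not DF-MINN).

    instances: (n_patches, d) patch features; v: (d, k) and w: (k,) are the
    attention parameters. Patches are weighted by learned attention scores
    and summed into a single bag-level feature vector.
    """
    scores = np.tanh(instances @ v) @ w   # (n_patches,) attention logits
    a = softmax(scores)                   # weights, non-negative, sum to 1
    return a @ instances                  # (d,) bag representation
```

Because the attention weights form a convex combination, the pooled feature stays inside the span of the patch features while letting informative patches (e.g. lesion regions) dominate.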
Cited by: 1
The Winning Solution to the iFLYTEK Challenge 2021 Cultivated Land Extraction from High-Resolution Remote Sensing Images
Pub Date : 2022-02-22 DOI: 10.1109/ICACI55529.2022.9837765
Z. Zhao, Yuqiu Liu, Gang Zhang, Liang Tang, Xiao-Ning Hu
Extracting cultivated land accurately from high-resolution remote sensing images is a basic task for precision agriculture. This paper introduces our solution to the iFLYTEK Challenge 2021 on cultivated land extraction from high-resolution remote sensing images. We established a highly effective and efficient pipeline for this problem. We first divided the original images into small tiles and performed instance segmentation on each tile separately. We explored several instance segmentation algorithms that work well on natural images and developed a set of effective methods applicable to remote sensing images. We then merged the predictions of all small tiles into seamless, continuous segmentation results through our proposed overlap-tile fusion strategy. We achieved first place among 486 teams in the challenge.
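The overlap-tile fusion strategy itself is not detailed in the abstract; a minimal sketch of the common tile-and-average approach it builds on (helper names and the averaging rule are assumptions, not the authors' exact method):

```python
import numpy as np

def tile_coords(h, w, tile, stride):
    """Top-left corners of overlapping tiles covering an h x w image,
    snapping the last row/column of tiles flush with the image border."""
    ys = list(range(0, h - tile + 1, stride))
    xs = list(range(0, w - tile + 1, stride))
    if ys[-1] != h - tile:
        ys.append(h - tile)
    if xs[-1] != w - tile:
        xs.append(w - tile)
    return [(y, x) for y in ys for x in xs]

def merge_tiles(tiles, coords, shape, tile):
    """Average per-pixel scores from overlapping tiles into one seamless map."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for t, (y, x) in zip(tiles, coords):
        acc[y:y + tile, x:x + tile] += t
        cnt[y:y + tile, x:x + tile] += 1
    return acc / np.maximum(cnt, 1)   # guard against uncovered pixels
```

Averaging in the overlap bands is what removes the visible seams that naive tile-by-tile prediction leaves at tile borders.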
Cited by: 3