DetachMix: Color Channels Fusion for Training Object Detection Neural Networks

Boyu Zhao, Zhicheng Dong, Jie Li, Bin Zhao, Pengfei Li
{"title":"DetachMix:颜色通道融合训练目标检测神经网络","authors":"Boyu Zhao, Zhicheng Dong, Jie Li, Bin Zhao, Pengfei Li","doi":"10.1109/ICESIT53460.2021.9696486","DOIUrl":null,"url":null,"abstract":"Pre-training strategies greatly improve various image classification model's accuracy. They have proved to be effective for guiding the model to attend on objects in complex samples, thereby letting the network perform better without adding extra inference cost. However, the training strategies and pipelines dramatically vary among different models, and they only change the complexity of the samples, but do not really combine the targets with the training process of the network. We therefore propose the DetachMix augmentation strategy, and it is divided into two steps: the first step is to segment the picture according to the color channels and train them separately on the network.The second step is to merge the first convolutional layer of the obtained weight files to replace the one obtained during normal training. The network model gained through DetachMix can not only combine images with network training, but also can be used in combination with other methods of processing samples (such as cutmix [1], Mixup [2], etc.). The method we proposed improves network performance by improving the network's ability to extract original semantic information from picture, and it is applicable to all models with the same reasoning time. We conducted a test on the collected Person Head State dataset and compared our method with the latest data processing methods. DetachMix can improve up to at most 8.3% precision compared to state-of-the-art baselines,reached 91.43 %.","PeriodicalId":164745,"journal":{"name":"2021 IEEE International Conference on Emergency Science and Information Technology (ICESIT)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DetachMix: Color Channels Fusion for Training Object Detection Neural Networks\",\"authors\":\"Boyu Zhao, Zhicheng Dong, Jie Li, Bin Zhao, Pengfei Li\",\"doi\":\"10.1109/ICESIT53460.2021.9696486\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Pre-training strategies greatly improve various image classification model's accuracy. They have proved to be effective for guiding the model to attend on objects in complex samples, thereby letting the network perform better without adding extra inference cost. However, the training strategies and pipelines dramatically vary among different models, and they only change the complexity of the samples, but do not really combine the targets with the training process of the network. We therefore propose the DetachMix augmentation strategy, and it is divided into two steps: the first step is to segment the picture according to the color channels and train them separately on the network.The second step is to merge the first convolutional layer of the obtained weight files to replace the one obtained during normal training. The network model gained through DetachMix can not only combine images with network training, but also can be used in combination with other methods of processing samples (such as cutmix [1], Mixup [2], etc.). The method we proposed improves network performance by improving the network's ability to extract original semantic information from picture, and it is applicable to all models with the same reasoning time. 
We conducted a test on the collected Person Head State dataset and compared our method with the latest data processing methods. DetachMix can improve up to at most 8.3% precision compared to state-of-the-art baselines,reached 91.43 %.\",\"PeriodicalId\":164745,\"journal\":{\"name\":\"2021 IEEE International Conference on Emergency Science and Information Technology (ICESIT)\",\"volume\":\"81 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Emergency Science and Information Technology (ICESIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICESIT53460.2021.9696486\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Emergency Science and Information Technology (ICESIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICESIT53460.2021.9696486","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Pre-training strategies greatly improve the accuracy of various image classification models. They have proved effective at guiding a model to attend to objects in complex samples, letting the network perform better without adding inference cost. However, training strategies and pipelines vary dramatically across models, and they only change the complexity of the samples; they do not truly couple the targets with the network's training process. We therefore propose the DetachMix augmentation strategy, which has two steps: first, split each picture into its color channels and train the network on each channel separately; second, merge the first convolutional layers from the resulting weight files and substitute the merged layer for the one obtained during normal training. A network model obtained through DetachMix not only couples the images with network training, but can also be combined with other sample-processing methods (such as CutMix [1], Mixup [2], etc.). The proposed method improves network performance by strengthening the network's ability to extract the original semantic information from a picture, and it applies to any model while leaving inference time unchanged. We evaluated it on our collected Person Head State dataset and compared it against recent data processing methods. DetachMix improves precision by up to 8.3% over state-of-the-art baselines, reaching 91.43%.
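
The abstract gives only this two-step outline, so the following is a minimal, hypothetical PyTorch sketch of the procedure as we read it. The toy model (SmallNet), the attribute name conv1, the brief training loop, and the merge rule (element-wise mean of the three first-layer weights) are all our assumptions for illustration, not details taken from the paper.

import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy stand-in for a detection backbone (hypothetical)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # first conv layer
        self.head = nn.Sequential(
            nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes))

    def forward(self, x):
        return self.head(self.conv1(x))

def train_briefly(model, x, y, steps=10):
    # Stand-in for a full training pipeline.
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

# Toy data standing in for the Person Head State dataset.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 2, (8,))

# Step 1: train one copy of the network per color channel,
# zeroing out the other two channels.
channel_models = []
for c in range(3):
    xc = torch.zeros_like(x)
    xc[:, c] = x[:, c]  # keep only channel c
    channel_models.append(train_briefly(SmallNet(), xc, y))

# Step 2: train a model normally, then replace its first conv layer
# with a merge of the three channel-specialized first layers.
# The mean is an assumed merge rule; the paper may combine differently.
final = train_briefly(SmallNet(), x, y)
with torch.no_grad():
    w = torch.stack([m.conv1.weight for m in channel_models]).mean(0)
    b = torch.stack([m.conv1.bias for m in channel_models]).mean(0)
    final.conv1.weight.copy_(w)
    final.conv1.bias.copy_(b)

Because only the first convolutional layer is swapped, the merged model keeps the same architecture, and therefore the same inference cost, as the normally trained one, which matches the abstract's claim of no extra inference cost.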