Parallel incremental SVM for classifying million images with very high-dimensional signatures into thousand classes

Thanh-Nghi Doan, Thanh-Nghi Do, F. Poulet
{"title":"基于并行增量支持向量机的高维图像分类算法","authors":"Thanh-Nghi Doan, Thanh-Nghi Do, F. Poulet","doi":"10.1109/IJCNN.2013.6707121","DOIUrl":null,"url":null,"abstract":"ImageNet dataset [1] with more than 14M images and 21K classes makes the problem of visual classification more difficult to deal with. One of the most difficult tasks is to train a fast and accurate classifier on computers with limited memory resource. In this paper, we address this challenge by extending the state-of-the-art large scale classifier Power Mean SVM (PmSVM) proposed by Jianxin Wu [2] in three ways: (1) An incremental learning for PmSVM, (2) A balanced bagging algorithm for training binary classifiers, (3) Parallelize the training process of classifiers with several multi-core computers. Our approach is evaluated on 1K classes of ImageNet (ILSVRC 1000 [3]). The evaluation shows that our approach can save up to 84.34% memory usage and the training process is 297 times faster than the original implementation and 1508 times faster than the state-of-the-art linear classifier (LIBLINEAR [4]).","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Parallel incremental SVM for classifying million images with very high-dimensional signatures into thousand classes\",\"authors\":\"Thanh-Nghi Doan, Thanh-Nghi Do, F. Poulet\",\"doi\":\"10.1109/IJCNN.2013.6707121\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ImageNet dataset [1] with more than 14M images and 21K classes makes the problem of visual classification more difficult to deal with. One of the most difficult tasks is to train a fast and accurate classifier on computers with limited memory resource. In this paper, we address this challenge by extending the state-of-the-art large scale classifier Power Mean SVM (PmSVM) proposed by Jianxin Wu [2] in three ways: (1) An incremental learning for PmSVM, (2) A balanced bagging algorithm for training binary classifiers, (3) Parallelize the training process of classifiers with several multi-core computers. Our approach is evaluated on 1K classes of ImageNet (ILSVRC 1000 [3]). 
The evaluation shows that our approach can save up to 84.34% memory usage and the training process is 297 times faster than the original implementation and 1508 times faster than the state-of-the-art linear classifier (LIBLINEAR [4]).\",\"PeriodicalId\":376975,\"journal\":{\"name\":\"The 2013 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"61 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-08-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The 2013 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.2013.6707121\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 2013 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2013.6707121","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

The ImageNet dataset [1], with more than 14M images in 21K classes, makes visual classification particularly difficult. One of the hardest tasks is to train a fast and accurate classifier on computers with limited memory. In this paper, we address this challenge by extending the state-of-the-art large-scale classifier Power Mean SVM (PmSVM), proposed by Jianxin Wu [2], in three ways: (1) incremental learning for PmSVM, (2) a balanced bagging algorithm for training the binary classifiers, and (3) parallelizing the training of the classifiers across several multi-core computers. Our approach is evaluated on the 1K classes of ImageNet (ILSVRC 1000 [3]). The evaluation shows that our approach saves up to 84.34% of memory usage, and training is 297 times faster than the original implementation and 1508 times faster than the state-of-the-art linear classifier LIBLINEAR [4].
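The abstract names the three extensions without detail. As an illustration of point (1), incremental learning under a memory budget, here is a minimal sketch in which the training data is streamed in memory-sized chunks and the model is updated chunk by chunk. PmSVM's incremental solver is not publicly packaged, so scikit-learn's SGDClassifier (a linear SVM trained with hinge loss, which supports partial_fit) stands in as a hypothetical placeholder, and the random data is a toy stand-in for high-dimensional image signatures.

```python
# Sketch of incremental (chunked) training under a memory budget.
# SGDClassifier is a stand-in for PmSVM's incremental solver.
import numpy as np
from sklearn.linear_model import SGDClassifier

def chunks(X, y, chunk_size):
    """Yield the dataset in memory-sized chunks."""
    for start in range(0, len(y), chunk_size):
        yield X[start:start + chunk_size], y[start:start + chunk_size]

# Toy data standing in for high-dimensional image signatures.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 256))
y = rng.integers(0, 10, size=10_000)

clf = SGDClassifier(loss="hinge")  # hinge loss => linear SVM trained by SGD
all_classes = np.unique(y)         # the full label set must be declared up front
for X_chunk, y_chunk in chunks(X, y, chunk_size=1_000):
    # Each call updates the model in place; only one chunk is resident
    # at a time, which is what bounds the memory footprint.
    clf.partial_fit(X_chunk, y_chunk, classes=all_classes)
```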
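Points (2) and (3) combine balanced bagging for the one-vs-rest binary problems with per-class parallel training. The paper's PmSVM solver is again not reproduced here; the following minimal sketch uses scikit-learn's LinearSVC as a hypothetical stand-in for the binary base classifier and Python's multiprocessing for the parallelism across cores.

```python
# Sketch of balanced bagging + parallel one-vs-rest training.
import numpy as np
from multiprocessing import Pool
from sklearn.svm import LinearSVC

def train_binary(args):
    """Train one one-vs-rest binary classifier with balanced bagging:
    keep all positives, undersample the negatives to the same size."""
    X, y, cls, seed = args
    rng = np.random.default_rng(seed)
    pos = np.where(y == cls)[0]
    neg = np.where(y != cls)[0]
    # Balanced bagging: sample as many negatives as there are positives,
    # so each binary problem is roughly balanced and much smaller.
    neg_sample = rng.choice(neg, size=len(pos), replace=False)
    idx = np.concatenate([pos, neg_sample])
    clf = LinearSVC()  # hypothetical stand-in for a PmSVM binary solver
    clf.fit(X[idx], (y[idx] == cls).astype(int))
    return cls, clf

if __name__ == "__main__":
    # Toy data standing in for high-dimensional image signatures.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1_000, 64))
    y = rng.integers(0, 10, size=1_000)
    # One one-vs-rest binary problem per class, trained on separate cores.
    jobs = [(X, y, c, c) for c in range(10)]
    with Pool() as pool:
        models = dict(pool.map(train_binary, jobs))
```

At test time a label would be assigned, as in standard one-vs-rest, by taking the argmax of decision_function over the per-class binary models.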