Object classification with deep convolutional neural network using spatial information

Ryusei Shima, He Yunan, O. Fukuda, H. Okumura, K. Arai, N. Bu
DOI: 10.1109/ICIIBMS.2017.8279704
Published in: 2017 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), November 2017
Citations: 9

Abstract

This paper proposes a prosthetic control method that combines a novel object classifier with a conventional EMG-based motion classifier. The proposed method uses not only color information but also spatial information to reduce the misclassifications observed in previous research. Depth images are created from spatial information acquired by a Kinect sensor. A deep convolutional neural network is adopted for object classification, and the posture of the prosthetic hand is controlled based on the classification result. To verify the validity of the proposed control method, experiments were carried out with six target objects whose shapes resemble one another from particular perspectives; 300 images of each target object were acquired from various directions. We trained the deep convolutional neural network on hybrid images that combine gray-scale and depth information. In the experiments, the depth information improved learning performance, yielding high classification accuracy. These results reveal that the proposed method has high potential to improve object classification ability.
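The abstract describes building "hybrid images" that pair gray-scale intensity with Kinect depth before feeding them to the CNN. The paper does not give construction details, so the following is only a minimal sketch of one plausible interpretation: stack a luminance channel and a normalized depth channel into a two-channel array. The function name `make_hybrid_image` and the working range `depth_max` (raw Kinect depth in millimetres, capped here at 4.5 m) are assumptions for illustration, not values from the paper.

```python
import numpy as np

def make_hybrid_image(rgb, depth, depth_max=4500.0):
    """Combine a color frame and a Kinect depth frame into a
    two-channel hybrid image: channel 0 is gray-scale intensity,
    channel 1 is depth normalized to [0, 1]. `depth_max` is an
    assumed working range (~4.5 m in raw Kinect millimetres)."""
    # Gray-scale via the usual luminance weights, scaled to [0, 1].
    gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]) / 255.0
    # Clip and normalize depth; a zero reading (no return) stays 0.
    d = np.clip(depth.astype(np.float32), 0.0, depth_max) / depth_max
    return np.stack([gray.astype(np.float32), d], axis=-1)

# Tiny example: a 4x4 color patch and a matching depth map.
rgb = np.full((4, 4, 3), 128, dtype=np.uint8)
depth = np.full((4, 4), 2250, dtype=np.uint16)  # ~2.25 m
hybrid = make_hybrid_image(rgb, depth)
print(hybrid.shape)  # (4, 4, 2)
```

A two-channel array like this can be passed directly to any CNN whose first convolution expects two input channels, which matches the paper's observation that adding the depth channel improves classification over intensity alone.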