Ryusei Shima, He Yunan, O. Fukuda, H. Okumura, K. Arai, N. Bu
{"title":"Object classification with deep convolutional neural network using spatial information","authors":"Ryusei Shima, He Yunan, O. Fukuda, H. Okumura, K. Arai, N. Bu","doi":"10.1109/ICIIBMS.2017.8279704","DOIUrl":null,"url":null,"abstract":"This paper proposes a prosthetic control method which incorporates a novel object classifier with a conventional EMG-based motion classifier. The proposed method uses not only color information but spatial information to reduce the misclassification in previous research. The depth images are created based on spatial information which is acquired by Kinect. The deep convolutional neural network is adopted for the object classification, and the posture of the prosthetic hand is controlled based on the classification result of the object. To verify the validity of the proposed control method, the experiments have been carried out with 6 target objects. The 300 images for each target object were acquired in various directions. Their shapes resemble each other in particular perspective. We trained the deep convolutional neural network using the hybrid images which involve gray scale and depth information. In the experiments, the depth information improved the learning performance with high classification accuracy. 
These results revealed that the proposed method has high potential to improve object classification ability.","PeriodicalId":122969,"journal":{"name":"2017 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIIBMS.2017.8279704","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9
Abstract
This paper proposes a prosthetic control method that combines a novel object classifier with a conventional EMG-based motion classifier. The proposed method uses not only color information but also spatial information to reduce the misclassifications observed in previous research. Depth images are created from spatial information acquired by a Kinect sensor. A deep convolutional neural network is adopted for object classification, and the posture of the prosthetic hand is controlled based on the classification result. To verify the validity of the proposed control method, experiments were carried out with 6 target objects whose shapes resemble each other from particular perspectives; 300 images of each target object were acquired from various directions. We trained the deep convolutional neural network on hybrid images that combine grayscale and depth information. In the experiments, the depth information improved learning performance and yielded high classification accuracy. These results reveal that the proposed method has high potential to improve object classification ability.
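The core input-preparation step described above, combining a grayscale image with a Kinect depth map into a single multi-channel "hybrid image" for the CNN, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the normalization scheme, channel-last layout, and the function name `make_hybrid_image` are all assumptions for the sake of the example.

```python
import numpy as np

def make_hybrid_image(gray, depth):
    """Stack a grayscale frame and a depth map into a 2-channel array.

    Both channels are min-max normalized to [0, 1] so the CNN sees
    comparable value ranges. Channel-last (H, W, 2) layout is an
    illustrative assumption, not the paper's specified format.
    """
    gray = gray.astype(np.float32)
    depth = depth.astype(np.float32)
    gray = (gray - gray.min()) / (gray.max() - gray.min() + 1e-8)
    depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    return np.stack([gray, depth], axis=-1)

# Example: a toy 4x4 grayscale frame and a Kinect-style depth map
# (Kinect reports depth in millimetres; the values here are fabricated).
gray = np.arange(16, dtype=np.uint8).reshape(4, 4)
depth = np.linspace(500.0, 4000.0, 16).reshape(4, 4)
hybrid = make_hybrid_image(gray, depth)
print(hybrid.shape)  # (4, 4, 2)
```

The resulting `(H, W, 2)` array can be fed to any image classifier whose first convolutional layer accepts two input channels, which is one straightforward way to let the network learn from shape cues (depth) alongside appearance cues (intensity).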