{"title":"Preliminary Study of Object Recognition by Converting Physical Responses to Images in Two Dimensions","authors":"Kazuki Yane, T. Nozaki","doi":"10.1109/ICM54990.2023.10101938","DOIUrl":null,"url":null,"abstract":"The use of robots is desired as a replacement for human labor. However, it is difficult for robots to respond flexibly to changes in objects and environments and perform tasks. Recently, many systems have been proposed that can flexibly respond to changes by generating robot motions using machine learning. Many machine learning methods use a camera to acquire environmental information, and feature extraction is performed using images acquired from the camera using CNN (Convolutional Neural Network), CAE (Convolutional Auto Encoder), or other methods. Many methods estimate the input values in the next step by inputting the image features, position data and reaction force data acquired from the robot together into the RNN (Recurrent Neural Network), etc. However, in most cases, the relationship between the image and robot data is learned without explicitly stating it. Therefore, in this paper, the data acquired from the robot is converted to images and used in combination with images from the camera to make the interaction between the robot and the environment explicit and to improve the estimation accuracy of NNs. In simulations, the proposed method was used to perform the task of discriminating the target of motion, and the high estimation accuracy was confirmed. In the future, we plan to use this method as input data for motion generation to generate motion according to the object.","PeriodicalId":416176,"journal":{"name":"2023 IEEE International Conference on Mechatronics (ICM)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Mechatronics (ICM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICM54990.2023.10101938","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Robots are increasingly expected to substitute for human labor. However, it is difficult for robots to respond flexibly to changes in objects and environments while performing tasks. Recently, many systems have been proposed that respond flexibly to such changes by generating robot motions with machine learning. Most of these methods acquire environmental information with a camera and extract features from the captured images using a CNN (Convolutional Neural Network), a CAE (Convolutional Autoencoder), or similar models. The extracted image features are then fed, together with position and reaction-force data acquired from the robot, into an RNN (Recurrent Neural Network) or a related model to estimate the inputs for the next time step. In most cases, however, the relationship between the image and the robot data is learned only implicitly. In this paper, therefore, the data acquired from the robot are converted into images and combined with the images from the camera, making the interaction between the robot and the environment explicit and improving the estimation accuracy of the neural network. In simulations, the proposed method was applied to the task of discriminating the target object of a motion, and high estimation accuracy was confirmed. In the future, we plan to use this representation as input data for motion generation, so that motions can be generated according to the object.
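The abstract does not specify how the robot's physical responses are rendered as an image, so the following is only a minimal sketch of the general idea: position and reaction-force time series are normalized and tiled into a two-channel 2-D image, stacked channel-wise with the camera frame, and passed to a small CNN classifier. All shapes, layer sizes, and the tiling scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def responses_to_image(position: torch.Tensor, force: torch.Tensor,
                       size: int = 64) -> torch.Tensor:
    """Convert (T,) position and force sequences into a 2-channel
    size x size image: each series is min-max normalized, resampled to
    `size` samples, and repeated along the image rows."""
    def tile(series: torch.Tensor) -> torch.Tensor:
        s = (series - series.min()) / (series.max() - series.min() + 1e-8)
        s = F.interpolate(s.view(1, 1, -1), size=size,
                          mode="linear", align_corners=False).view(size)
        return s.unsqueeze(0).expand(size, size)  # repeat row vertically
    return torch.stack([tile(position), tile(force)])  # (2, size, size)

class FusionCNN(nn.Module):
    """CNN over the channel-wise fusion of a camera image (3 channels)
    and the response image (2 channels)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(5, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, camera: torch.Tensor, response: torch.Tensor) -> torch.Tensor:
        x = torch.cat([camera, response], dim=1)  # make fusion explicit
        return self.head(self.features(x).flatten(1))

# Example: one 64x64 camera frame plus 100-step position/force traces.
camera = torch.rand(1, 3, 64, 64)
response = responses_to_image(torch.randn(100), torch.randn(100)).unsqueeze(0)
logits = FusionCNN(num_classes=2)(camera, response)  # (1, 2) object scores
```

Stacking the response image with the camera image before feature extraction is one plausible way to make the robot-environment interaction explicit to the network, in contrast to the common approach of concatenating raw robot data with image features only at the RNN input.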