Visuo-Tactile Recognition of Daily-Life Objects Never Seen or Touched Before
Zineb Abderrahmane, G. Ganesh, A. Crosnier, A. Cherubini
2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), November 2018
DOI: 10.1109/ICARCV.2018.8581230
Citations: 11
Abstract
This study proposes a visuo-tactile Zero-Shot object recognition framework. The proposed framework recognizes a set of novel objects for which no tactile or visual training data are available. It uses visuo-tactile training data collected from known objects to recognize the novel ones, given their attributes. The framework extends the haptic Zero-Shot Learning framework we proposed in [1] with vision, yielding a multimodal recognition system. In our test on the PHAC-2 dataset, the system achieved a recognition accuracy of 72% across 6 objects that were never touched or seen during the training phase.
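The paper itself does not include code; as a rough illustration of the attribute-based Zero-Shot idea the abstract describes, the sketch below trains one classifier per semantic attribute on visuo-tactile features of known objects, then assigns a novel object to the unseen class whose attribute signature best matches the predicted attributes. All feature dimensions, attribute counts, and the nearest-signature decision rule are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of attribute-based Zero-Shot recognition (assumed setup,
# not the authors' method): per-attribute classifiers are trained on
# visuo-tactile features of known objects; a novel object is assigned to
# the unseen class whose binary attribute signature is closest to the
# predicted attribute probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Assumed toy data: concatenated visual + tactile feature vectors of known
# objects, each annotated with binary semantic attributes (e.g. "soft",
# "smooth", "deformable").
n_train, n_feat, n_attr = 200, 64, 8
X_train = rng.normal(size=(n_train, n_feat))          # visuo-tactile features
A_train = rng.integers(0, 2, size=(n_train, n_attr))  # attribute labels

# One binary classifier per attribute.
attr_models = [
    LogisticRegression(max_iter=1000).fit(X_train, A_train[:, j])
    for j in range(n_attr)
]

# Attribute signatures of the novel (unseen) classes, one binary row per
# class, given as side information rather than learned from data.
novel_signatures = rng.integers(0, 2, size=(6, n_attr))

def recognize(x):
    """Return the index of the novel class whose attribute signature lies
    closest to the attributes predicted from the visuo-tactile features."""
    probs = np.array([m.predict_proba(x.reshape(1, -1))[0, 1]
                      for m in attr_models])
    # Nearest signature in Euclidean distance (an assumed decision rule).
    return int(np.argmin(np.linalg.norm(novel_signatures - probs, axis=1)))

print(recognize(rng.normal(size=n_feat)))
```

In this style of pipeline, the attribute layer is what lets the system generalize: the classifiers only ever see known objects, while the novel classes are described purely by their attribute signatures.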