Visual–tactile fusion object classification method based on adaptive feature weighting

International Journal of Advanced Robotic Systems · IF 2.3 · CAS Zone 4 (Computer Science) · JCR Q2 (Computer Science) · Pub Date: 2023-07-01 · DOI: 10.1177/17298806231191947
Peng Zhang, Lu Bai, Dongri Shan, Xiaofang Wang, Shuang Li, W. Zou, Zhenxue Chen
{"title":"基于自适应特征加权的视觉-触觉融合物体分类方法","authors":"Peng Zhang, Lu Bai, Dongri Shan, Xiaofang Wang, Shuang Li, W. Zou, Zhenxue Chen","doi":"10.1177/17298806231191947","DOIUrl":null,"url":null,"abstract":"Visual–tactile fusion information plays a crucial role in robotic object classification. The fusion module in existing visual–tactile fusion models directly splices visual and tactile features at the feature layer; however, for different objects, the contributions of visual features and tactile features to classification are different. Moreover, direct concatenation may ignore features that are more beneficial for classification and will also increase computational costs and reduce model classification efficiency. To utilize object feature information more effectively and further improve the efficiency and accuracy of robotic object classification, we propose a visual–tactile fusion object classification method based on adaptive feature weighting in this article. First, a lightweight feature extraction module is used to extract the visual and tactile features of each object. Then, the two feature vectors are input into an adaptive weighted fusion module. Finally, the fused feature vector is input into the fully connected layer for classification, yielding the categories and physical attributes of the objects. In this article, extensive experiments are performed with the Penn Haptic Adjective Corpus 2 public dataset and the newly developed Visual-Haptic Adjective Corpus 52 dataset. The experimental results demonstrate that for the public dataset Penn Haptic Adjective Corpus 2, our method achieves a value of 0.9750 in terms of the area under the curve. Compared with the highest area under the curve obtained by the existing state-of-the-art methods, our method improves by 1.92%. Moreover, compared with the existing state-of-the-art methods, our method achieves the best results in training time and inference time; while for the novel Visual-Haptic Adjective Corpus 52 dataset, our method achieves values of 0.9827 and 0.9850 in terms of the area under the curve and accuracy metrics, respectively. Furthermore, the inference time reaches 1.559 s/sheet, demonstrating the effectiveness of the proposed method.","PeriodicalId":50343,"journal":{"name":"International Journal of Advanced Robotic Systems","volume":" ","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Visual–tactile fusion object classification method based on adaptive feature weighting\",\"authors\":\"Peng Zhang, Lu Bai, Dongri Shan, Xiaofang Wang, Shuang Li, W. Zou, Zhenxue Chen\",\"doi\":\"10.1177/17298806231191947\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Visual–tactile fusion information plays a crucial role in robotic object classification. The fusion module in existing visual–tactile fusion models directly splices visual and tactile features at the feature layer; however, for different objects, the contributions of visual features and tactile features to classification are different. Moreover, direct concatenation may ignore features that are more beneficial for classification and will also increase computational costs and reduce model classification efficiency. 
To utilize object feature information more effectively and further improve the efficiency and accuracy of robotic object classification, we propose a visual–tactile fusion object classification method based on adaptive feature weighting in this article. First, a lightweight feature extraction module is used to extract the visual and tactile features of each object. Then, the two feature vectors are input into an adaptive weighted fusion module. Finally, the fused feature vector is input into the fully connected layer for classification, yielding the categories and physical attributes of the objects. In this article, extensive experiments are performed with the Penn Haptic Adjective Corpus 2 public dataset and the newly developed Visual-Haptic Adjective Corpus 52 dataset. The experimental results demonstrate that for the public dataset Penn Haptic Adjective Corpus 2, our method achieves a value of 0.9750 in terms of the area under the curve. Compared with the highest area under the curve obtained by the existing state-of-the-art methods, our method improves by 1.92%. Moreover, compared with the existing state-of-the-art methods, our method achieves the best results in training time and inference time; while for the novel Visual-Haptic Adjective Corpus 52 dataset, our method achieves values of 0.9827 and 0.9850 in terms of the area under the curve and accuracy metrics, respectively. Furthermore, the inference time reaches 1.559 s/sheet, demonstrating the effectiveness of the proposed method.\",\"PeriodicalId\":50343,\"journal\":{\"name\":\"International Journal of Advanced Robotic Systems\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2023-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Advanced Robotic Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1177/17298806231191947\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Advanced Robotic Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1177/17298806231191947","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Computer Science","Score":null,"Total":0}
Citations: 0

Abstract

Visual–tactile fusion information plays a crucial role in robotic object classification. The fusion module in existing visual–tactile fusion models directly concatenates visual and tactile features at the feature layer; however, the contributions of visual and tactile features to classification differ from object to object. Moreover, direct concatenation may ignore features that are more beneficial for classification, while also increasing computational cost and reducing classification efficiency. To use object feature information more effectively and to further improve the efficiency and accuracy of robotic object classification, this article proposes a visual–tactile fusion object classification method based on adaptive feature weighting. First, a lightweight feature extraction module extracts the visual and tactile features of each object. Then, the two feature vectors are fed into an adaptive weighted fusion module. Finally, the fused feature vector is passed to a fully connected layer for classification, yielding the categories and physical attributes of the objects. Extensive experiments are performed on the public Penn Haptic Adjective Corpus 2 dataset and the newly developed Visual-Haptic Adjective Corpus 52 dataset. On Penn Haptic Adjective Corpus 2, the method achieves an area under the curve of 0.9750, a 1.92% improvement over the highest value obtained by existing state-of-the-art methods, and it also achieves the best training and inference times among them. On the new Visual-Haptic Adjective Corpus 52 dataset, the method reaches 0.9827 area under the curve and 0.9850 accuracy, with an inference time of 1.559 s per sample, demonstrating the effectiveness of the proposed method.
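To make the pipeline described above more concrete, the following is a minimal PyTorch sketch of an adaptive feature-weighting fusion module followed by a fully connected classifier. The abstract does not specify the actual module design, so all class names, feature dimensions, the softmax gating scheme, and the class count are illustrative assumptions rather than the authors' published implementation.

# Minimal sketch only: module names, feature dimensions, the softmax gating
# scheme, and the class count below are assumptions for illustration, not the
# authors' published implementation.
import torch
import torch.nn as nn


class AdaptiveWeightedFusion(nn.Module):
    """Fuses visual and tactile feature vectors with per-sample learned weights."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # A small gating network predicts one weight per modality from the
        # concatenated features; softmax keeps the two weights normalized.
        self.gate = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, 2),
        )

    def forward(self, visual_feat: torch.Tensor, tactile_feat: torch.Tensor) -> torch.Tensor:
        # visual_feat, tactile_feat: (batch, feat_dim)
        weights = torch.softmax(
            self.gate(torch.cat([visual_feat, tactile_feat], dim=1)), dim=1
        )
        w_visual, w_tactile = weights[:, 0:1], weights[:, 1:2]
        # A weighted sum keeps the fused vector at feat_dim instead of 2 * feat_dim,
        # which is one way adaptive weighting can cut downstream cost compared
        # with plain concatenation.
        return w_visual * visual_feat + w_tactile * tactile_feat


class FusionClassifier(nn.Module):
    """Visual and tactile features -> adaptive weighted fusion -> fully connected layer."""

    def __init__(self, feat_dim: int = 128, num_classes: int = 10):
        super().__init__()
        self.fusion = AdaptiveWeightedFusion(feat_dim)
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, visual_feat: torch.Tensor, tactile_feat: torch.Tensor) -> torch.Tensor:
        return self.fc(self.fusion(visual_feat, tactile_feat))


if __name__ == "__main__":
    # Placeholder features standing in for the outputs of the lightweight
    # visual and tactile extraction modules.
    model = FusionClassifier(feat_dim=128, num_classes=10)
    visual = torch.randn(4, 128)
    tactile = torch.randn(4, 128)
    print(model(visual, tactile).shape)  # torch.Size([4, 10])

In a sketch of this kind, the gate can shift weight toward whichever modality is more informative for a given object, which mirrors the motivation stated above that visual and tactile features contribute differently to classification across objects.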
Source journal: International Journal of Advanced Robotic Systems
CiteScore: 6.50
Self-citation rate: 0.00%
Articles published: 65
Review time: 6 months
Journal description: International Journal of Advanced Robotic Systems (IJARS) is a JCR-ranked, peer-reviewed open access journal covering the full spectrum of robotics research. The journal is addressed to both practicing professionals and researchers in the field of robotics and its specialty areas. IJARS features fourteen topic areas, each headed by a Topic Editor-in-Chief, integrating all aspects of robotics research under the journal's domain.