A Transformer-Based Network for Dynamic Hand Gesture Recognition
Andrea D'Eusanio, A. Simoni, S. Pini, G. Borghi, R. Vezzani, R. Cucchiara
2020 International Conference on 3D Vision (3DV), November 2020. DOI: 10.1109/3DV50981.2020.00072
Citations: 20
Abstract
Transformer-based neural networks are built on a self-attention mechanism that achieves state-of-the-art results in language understanding and sequence modeling. However, their application to visual data and, in particular, to the dynamic hand gesture recognition task has not yet been deeply investigated. In this paper, we propose a transformer-based architecture for the dynamic hand gesture recognition task. We show that the use of a single active depth sensor, specifically depth maps and the surface normals estimated from them, achieves state-of-the-art results, outperforming all methods available in the literature on two automotive datasets, namely NVidia Dynamic Hand Gesture and Briareo. Moreover, we test the method with other data types available from common RGB-D devices, such as infrared and color data. We also assess the performance in terms of inference time and number of parameters, showing that the proposed framework is suitable for an online in-car infotainment system.
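The abstract mentions estimating surface normals from depth maps as an input modality. The paper does not detail its estimation procedure here, but a common approach is to approximate normals from the depth gradient via finite differences. The sketch below is an assumption-labeled illustration of that generic technique, not the authors' exact pipeline:

```python
import numpy as np

def normals_from_depth(depth):
    """Estimate per-pixel surface normals from a depth map.

    Generic finite-difference sketch (NOT necessarily the paper's method):
    the unnormalized normal at each pixel is (-dz/dx, -dz/dy, 1),
    which is then normalized to unit length.
    """
    dz_dx = np.gradient(depth, axis=1)  # horizontal depth gradient
    dz_dy = np.gradient(depth, axis=0)  # vertical depth gradient
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)  # unit-length normals
    return n

# Usage: a constant-depth (fronto-parallel) plane yields normals (0, 0, 1).
flat = np.full((4, 4), 2.0)
normals = normals_from_depth(flat)
```

The resulting three-channel normal map has the same spatial resolution as the depth map, so it can be fed to the network alongside (or in place of) the raw depth channel.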