Title: Touch Recognition with Attentive End-to-End Model
Authors: Wail El Bani, M. Chetouani
DOI: 10.1145/3382507.3418834
Published in: Proceedings of the 2020 International Conference on Multimodal Interaction (2020-10-21)
Citations: 0
Abstract
Touch is the earliest sense to develop and the first means of contact with the external world. Touch also plays a key role in our socio-emotional communication: we use it to communicate our feelings, elicit strong emotions in others, and modulate behavior (e.g., compliance). Despite its relevance, touch remains an understudied modality in Human-Machine Interaction compared to audition and vision. Most social touch recognition systems require a feature-engineering step, making them difficult to compare and to generalize to other databases. In this paper, we propose an end-to-end approach. We present an attention-based end-to-end model for touch gesture recognition, evaluated on two public datasets (CoST and HAART) in the context of the ICMI 15 Social Touch Challenge. Our model reaches a comparable level of accuracy on both datasets (61% on CoST and 68% on HAART) and uses self-attention as an alternative to feature engineering and Recurrent Neural Networks.
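The self-attention operation the abstract proposes in place of hand-crafted features and RNNs can be sketched as follows. This is a minimal illustration only: the sequence length, the flattened 8×8 pressure-frame dimension, and the projection sizes are assumptions for the example, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of frames.

    X: (T, d) array, one row per flattened pressure frame.
    Wq, Wk, Wv: (d, d_k) learned projection matrices (random here).
    Returns (T, d_k) context vectors, one per time step.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (T, T) frame-to-frame affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the time axis
    return weights @ V                                   # each output attends to all frames

rng = np.random.default_rng(0)
T, d, d_k = 5, 64, 16        # assumed: 5 frames of an 8x8 pressure grid, flattened
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 16)
```

Because every output row is a weighted sum over all input frames, the whole gesture sequence is processed in one matrix operation, with no recurrent state and no per-dataset feature design.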