Title: Metric Learning-Based Multimodal Audio-Visual Emotion Recognition
Authors: E. Ghaleb, Mirela C. Popa, S. Asteriadis
Journal: IEEE MultiMedia, vol. 27, no. 1, pp. 37-48 (JCR Q2, Computer Science, Hardware & Architecture)
Publication date: 2020-01-01
DOI: 10.1109/MMUL.2019.2960219 (https://doi.org/10.1109/MMUL.2019.2960219)
Citations: 22
Abstract
People express their emotions through multiple channels, such as the visual and audio ones. Consequently, automatic emotion recognition can benefit significantly from multimodal learning. Even though each modality exhibits unique characteristics, multimodal learning takes advantage of the complementary information of diverse modalities when measuring the same instance, resulting in an enhanced understanding of emotions. Yet, these dependencies and relations are not fully exploited in audio-video emotion recognition. Furthermore, learning an effective metric through multimodality is a crucial goal for many applications in machine learning. Therefore, in this article, we propose multimodal emotion recognition metric learning (MERML), learned jointly to obtain a discriminative score and a robust representation in a latent space for both modalities. The learned metric is used efficiently through a radial basis function (RBF) based support vector machine (SVM) kernel. The evaluation of our framework shows significant performance gains, improving on the state-of-the-art results on the eNTERFACE and CREMA-D datasets.
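The last step described above — feeding a learned metric into an RBF-SVM kernel — can be sketched as follows. This is a minimal illustration, not the paper's MERML implementation: the toy features, the random positive semi-definite matrix `M`, and the `gamma` value are all placeholder assumptions standing in for the jointly learned metric and the real audio-visual features.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy two-class "emotion" features (placeholders for real audio-visual features).
X = rng.normal(size=(40, 6))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, 0] += 3.0  # separate the classes along one dimension

# Hypothetical learned metric: any PSD matrix M; A @ A.T guarantees PSD.
A = rng.normal(size=(6, 6))
M = A @ A.T

def metric_rbf_kernel(U, V, M, gamma=0.01):
    """RBF kernel whose squared distance uses the learned metric M:
    k(u, v) = exp(-gamma * (u - v)^T M (u - v))."""
    K = np.empty((U.shape[0], V.shape[0]))
    for i in range(U.shape[0]):
        d = U[i] - V  # (n_V, dim) row-wise differences
        # einsum computes the Mahalanobis-style squared distance per row of d
        K[i] = np.exp(-gamma * np.einsum('ij,jk,ik->i', d, M, d))
    return K

# scikit-learn accepts a callable kernel taking (X, Y) -> (n_X, n_Y) Gram matrix.
clf = SVC(kernel=lambda U, V: metric_rbf_kernel(U, V, M))
clf.fit(X, y)
acc = clf.score(X, y)
```

Because `M` is positive semi-definite, the resulting kernel remains a valid positive-definite kernel, so the standard SVM machinery applies unchanged; only the notion of distance between samples is replaced by the learned one.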
Journal Introduction:
The magazine contains technical information covering a broad range of issues in multimedia systems and applications. Articles discuss research as well as advanced practice in hardware/software and are expected to span the range from theory to working systems. Especially encouraged are papers discussing experiences with new or advanced systems and subsystems. To avoid unnecessary overlap with existing publications, acceptable papers must have a significant focus on aspects unique to multimedia systems and applications. These aspects are likely to be related to the special needs of multimedia information compared to other electronic data, for example, the size requirements of digital media and the importance of time in the representation of such media. The following list is not exhaustive, but is representative of the topics that are covered:

- Hardware and software for media compression, coding & processing
- Media representations & standards for storage, editing, interchange, transmission & presentation
- Hardware platforms supporting multimedia applications
- Operating systems suitable for multimedia applications
- Storage devices & technologies for multimedia information
- Network technologies, protocols, architectures & delivery techniques intended for multimedia
- Synchronization issues
- Multimedia databases
- Formalisms for multimedia information systems & applications
- Programming paradigms & languages for multimedia
- Multimedia user interfaces
- Media creation, integration, editing & management
- Creation & modification of multimedia applications