Title: Uncorrelated multiview discriminant locality preserving projection analysis for multiview facial expression recognition
Authors: Sunil Kumar, M. Bhuyan, B. Chakraborty
Venue: Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing, pp. 86:1-86:8
DOI: 10.1145/3009977.3010056
Published: 2016-12-18
Citations: 5
Abstract
Recently, several multi-view learning-based methods have been proposed and found to be effective in many real-world applications. However, existing multi-view learning-based methods are not well suited to finding discriminative directions when the data are multimodal. In such cases, Locality Preserving Projection (LPP) and/or Local Fisher Discriminant Analysis (LFDA) are more appropriate for capturing discriminative directions. Furthermore, prior work shows that imposing an uncorrelatedness constraint on the common space improves the classification accuracy of the system. Inspired by these findings, we propose an Uncorrelated Multi-view Discriminant Locality Preserving Projection (UMvDLPP)-based approach. The proposed method searches for a common uncorrelated discriminative space shared by multiple observable spaces. Moreover, it can handle the multimodal characteristic that is inherent in multi-view facial expression recognition (FER) data, making it particularly effective for the multi-view FER problem. Experimental results show that the proposed method outperforms state-of-the-art multi-view learning-based methods.
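To make the LPP building block referenced above concrete, here is a minimal sketch of standard Locality Preserving Projection (a single-view method; this is not the paper's UMvDLPP algorithm, nor does it include the multi-view or uncorrelatedness machinery the paper adds). It builds a k-nearest-neighbour affinity graph with heat-kernel weights and solves the generalized eigenproblem X^T L X a = λ X^T D X a for the directions with the smallest eigenvalues; the neighbourhood size `k`, kernel width `t`, and regularizer are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, t=1.0):
    """Locality Preserving Projection sketch.

    X: (n_samples, n_features) data matrix.
    Returns a (n_features, n_components) projection matrix whose
    columns are the eigenvectors with the smallest eigenvalues of
    X^T L X a = lam * X^T D X a.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # k-nearest-neighbour adjacency with heat-kernel weights
    # (column 0 of the argsort is the point itself, so skip it).
    W = np.zeros((n, n))
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    for i in range(n):
        W[i, idx[i]] = np.exp(-d2[i, idx[i]] / t)
    W = np.maximum(W, W.T)                 # symmetrise the graph
    D = np.diag(W.sum(axis=1))             # degree matrix
    L = D - W                              # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-8 * np.eye(X.shape[1])  # small ridge for stability
    _, vecs = eigh(A, B)                   # generalized eigenproblem
    return vecs[:, :n_components]          # smallest-eigenvalue directions
```

Low-dimensional embeddings are then obtained as `X @ lpp(X)`. The paper's contribution, by contrast, learns one such projection per view jointly, with the projected views constrained to be uncorrelated in the shared space.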