Face morphing versus face averaging: Vulnerability and detection
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272742
Ramachandra Raghavendra, K. Raja, S. Venkatesh, C. Busch
The Face Recognition System (FRS) is known to be vulnerable to attacks using morphed face images. As the use of face characteristics is mandatory in the electronic passport (ePass), morphing attacks have raised concerns for border security. In this paper, we analyze the vulnerability of the FRS to a new attack performed using the averaged face. The averaged face is generated by simple pixel-level averaging of two face images corresponding to two different subjects. We benchmark the vulnerability of a commercial FRS to both conventional morphing and averaging-based face attacks. We further propose a novel algorithm based on the collaborative representation of micro-texture features extracted from the colour space to reliably detect both morphed and averaged face attacks on the FRS. Extensive experiments are carried out on a newly constructed morphed and averaged face image database with 163 subjects. The database is built by considering the real-life passport issuance scenario, which typically accepts a printed passport photo from the applicant that is then scanned and stored in the ePass. Thus, the newly constructed database contains print-scanned bona fide, morphed and averaged face samples. The obtained results demonstrate the improved performance of the proposed scheme on the print-scanned morphed and averaged face database.
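As a hedged illustration of the attack generation step (a minimal sketch, not the authors' exact pipeline, which presumably aligns the faces by landmarks before combining), pixel-level averaging of two aligned, equally sized face images reduces to:

```python
import numpy as np
from PIL import Image

def average_faces(path_a: str, path_b: str, out_path: str) -> None:
    """Create an 'averaged face' by simple pixel-level averaging.

    Assumes the two images are already aligned (e.g., by eye coordinates)
    and have identical dimensions; the paper's actual preprocessing is
    not reproduced here.
    """
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float32)
    if a.shape != b.shape:
        raise ValueError("input faces must be aligned to the same size")
    avg = (a + b) / 2.0  # equal 0.5/0.5 weighting of the two subjects
    Image.fromarray(avg.astype(np.uint8)).save(out_path)

# Example with hypothetical file names:
# average_faces("subject1.png", "subject2.png", "averaged.png")
```

Unlike conventional morphing, no warping of facial geometry is involved, which is what makes the averaging attack so cheap to mount.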
{"title":"Face morphing versus face averaging: Vulnerability and detection","authors":"Ramachandra Raghavendra, K. Raja, S. Venkatesh, C. Busch","doi":"10.1109/BTAS.2017.8272742","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272742","url":null,"abstract":"The Face Recognition System (FRS) is known to be vulnerable to the attacks using the morphed face. As the use of face characteristics are mandatory in the electronic passport (ePass), morphing attacks have raised the potential concerns in the border security. In this paper, we analyze the vulnerability of the FRS to the new attack performed using the averaged face. The averaged face is generated by simple pixel level averaging of two face images corresponding to two different subjects. We benchmark the vulnerability of the commercial FRS to both conventional morphing and averaging based face attacks. We further propose a novel algorithm based on the collaborative representation of the micro-texture features that are extracted from the colour space to reliably detect both morphed and averaged face attacks on the FRS. Extensive experiments are carried out on the newly constructed morphed and averaged face image database with 163 subjects. The database is built by considering the real-life scenario of the passport issuance that typically accepts the printed passport photo from the applicant that is further scanned and stored in the ePass. Thus, the newly constructed database is built to have the print-scanned bonafide, morphed and averaged face samples. The obtained results have demonstrated the improved performance of the proposed scheme on print-scanned morphed and averaged face database.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128125054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-iris indexing and retrieval: Fusion strategies for Bloom filter-based search structures
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272681
P. Drozdowski, C. Rathgeb, C. Busch
We present a multi-iris indexing system for efficient and accurate large-scale identification. The system is based on Bloom filters and binary search trees. We describe and empirically evaluate several possible information fusion strategies for the system. These experiments are performed using a combination of several publicly available datasets; the proposed system is tested in an open-set identification scenario consisting of 6,000 genuine and 100,000 impostor transactions. The system maintains the near-optimal biometric performance of a score-fusion-based iris-code baseline system, while reducing the required lookup workload to less than 1% thereof.
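For orientation, a minimal sketch of the column-hashing idea behind Bloom-filter-based iris search structures, assuming the general scheme from the Bloom-filter iris indexing literature (the parameters `block_width` and `word_height` and the distance definition are illustrative, not the paper's configuration):

```python
import numpy as np

def iris_code_to_bloom_filters(code: np.ndarray, block_width: int = 32,
                               word_height: int = 10) -> np.ndarray:
    """Map a binary iris-code matrix to a sequence of Bloom filters.

    Each column of `word_height` bits within a block of `block_width`
    columns is read as an integer and sets one membership bit in that
    block's Bloom filter of size 2**word_height.
    """
    rows, cols = code.shape
    assert rows >= word_height and cols % block_width == 0
    n_blocks = cols // block_width
    filters = np.zeros((n_blocks, 2 ** word_height), dtype=bool)
    weights = 1 << np.arange(word_height)  # bit weights of one column
    for b in range(n_blocks):
        block = code[:word_height, b * block_width:(b + 1) * block_width]
        indices = block.T.astype(np.int64) @ weights  # one integer per column
        filters[b, indices] = True
    return filters

def bloom_distance(f1: np.ndarray, f2: np.ndarray) -> float:
    """Normalized set-difference dissimilarity between two filter sequences."""
    total = f1.sum() + f2.sum()
    return np.logical_xor(f1, f2).sum() / total if total else 0.0
```

Because the filters discard column order within a block, comparisons tolerate rotational misalignment, and the fixed-length filter sequences can be organized in tree-based search structures for sub-linear lookup.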
{"title":"Multi-iris indexing and retrieval: Fusion strategies for bloom filter-based search structures","authors":"P. Drozdowski, C. Rathgeb, C. Busch","doi":"10.1109/BTAS.2017.8272681","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272681","url":null,"abstract":"We present a multi-iris indexing system for efficient and accurate large-scale identification. The system is based on Bloom filters and binary search trees. We describe and empirically evaluate several possible information fusion strategies for the system. Those experiments are performed using a combination of several publicly available datasets; the proposed system is tested in an open-set identification scenario consisting of 6,000 genuine and 100,000 impostor transactions. The system maintains the near-optimal biometric performance of an iris-code, score fusion based baseline system, while reducing the required lookup workload to less than 1% thereof.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126644534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synthetic iris presentation attack using iDCGAN
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272756
Naman Kohli, Daksha Yadav, Mayank Vatsa, Richa Singh, A. Noore
The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national ID projects. The extensive growth of iris recognition systems has raised apprehensions about the susceptibility of these systems to various attacks. In the past, researchers have examined the impact of various iris presentation attacks such as textured contact lenses and print attacks. In this research, we present a novel presentation attack using deep-learning-based synthetic iris generation. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, we propose a new framework, named iDCGAN (iris deep convolutional generative adversarial network), for generating realistic-appearing synthetic iris images. We demonstrate the effect of these synthetically generated iris images as a presentation attack on a commercial iris recognition system. The state-of-the-art presentation attack detection framework DESIST is utilized to analyze whether it can discriminate these synthetically generated iris images from real images. The experimental results illustrate that mitigating the proposed synthetic presentation attack is of paramount importance.
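For context, a minimal textbook DCGAN generator is sketched below; this is an assumption-laden stand-in, not the authors' iDCGAN architecture, which additionally incorporates iris quality metrics into training:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Minimal DCGAN-style generator mapping a latent vector z to a
    64x64 single-channel iris-like image (illustrative only)."""
    def __init__(self, z_dim: int = 100, feat: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),     # 4x4
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 1, 4, 2, 1, bias=False),             # 64x64
            nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z.view(z.size(0), -1, 1, 1))

# z = torch.randn(16, 100); fake_irises = Generator()(z)  # (16, 1, 64, 64)
```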
{"title":"Synthetic iris presentation attack using iDCGAN","authors":"Naman Kohli, Daksha Yadav, Mayank Vatsa, Richa Singh, A. Noore","doi":"10.1109/BTAS.2017.8272756","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272756","url":null,"abstract":"Reliability and accuracy of iris biometric modality has prompted its large-scale deployment for critical applications such as border control and national ID projects. The extensive growth of iris recognition systems has raised apprehensions about susceptibility of these systems to various attacks. In the past, researchers have examined the impact of various iris presentation attacks such as textured contact lenses and print attacks. In this research, we present a novel presentation attack using deep learning based synthetic iris generation. Utilizing the generative capability of deep con-volutional generative adversarial networks and iris quality metrics, we propose a new framework, named as iDCGAN (iris deep convolutional generative adversarial network) for generating realistic appearing synthetic iris images. We demonstrate the effect of these synthetically generated iris images as presentation attack on iris recognition by using a commercial system. The state-of-the-art presentation attack detection framework, DESIST is utilized to analyze if it can discriminate these synthetically generated iris images from real images. The experimental results illustrate that mitigating the proposed synthetic presentation attack is of paramount importance.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115286605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unconstrained visible spectrum iris with textured contact lens variations: Database and benchmarking
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272744
Daksha Yadav, Naman Kohli, Mayank Vatsa, Richa Singh, A. Noore
Iris recognition in the visible spectrum has developed into an active area of research. This has elevated the importance of efficient presentation attack detection algorithms, particularly in security-critical applications. In this paper, we present the first detailed analysis of the effect of textured contact lenses on iris recognition in the visible spectrum. We introduce the first contact lens database in the visible spectrum, the Unconstrained Visible Contact Lens Iris (UVCLI) Database, containing samples from 70 classes with subjects wearing textured contact lenses in indoor and outdoor environments across multiple sessions. We observe that textured contact lenses degrade visible spectrum iris recognition performance by over 25% and thus may be utilized, intentionally or unintentionally, to attack existing iris recognition systems. Next, three iris presentation attack detection (PAD) algorithms are evaluated on the proposed database, and a highest PAD accuracy of 82.85% is observed. This illustrates that there is significant scope for improvement in developing efficient PAD algorithms for the detection of textured contact lenses in unconstrained visible spectrum iris images.
{"title":"Unconstrained visible spectrum iris with textured contact lens variations: Database and benchmarking","authors":"Daksha Yadav, Naman Kohli, Mayank Vatsa, Richa Singh, A. Noore","doi":"10.1109/BTAS.2017.8272744","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272744","url":null,"abstract":"Iris recognition in visible spectrum has developed into an active area of research. This has elevated the importance of efficient presentation attack detection algorithms, particularly in security based critical applications. In this paper, we present the first detailed analysis of the effect of textured contact lenses on iris recognition in visible spectrum. We introduce the first contact lens database in visible spectrum, Unconstrained Visible Contact Lens Iris (UVCLI) Database, containing samples from 70 classes with subjects wearing textured contact lenses in indoor and outdoor environments across multiple sessions. We observe that textured contact lenses degrade the visible spectrum iris recognition performance by over 25% and thus, may be utilized intentionally or unintentionally to attack existing iris recognition systems. Next, three iris presentation attack detection (PAD) algorithms are evaluated on the proposed database and highest PAD accuracy of 82.85%c is observed. This illustrates that there is a significant scope of improvement in developing efficient PAD algorithms for detection of textured contact lenses in unconstrained visible spectrum iris images.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116300528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LivDet iris 2017 — Iris liveness detection competition 2017
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272763
David Yambay, Benedict Becker, Naman Kohli, Daksha Yadav, A. Czajka, K. Bowyer, S. Schuckers, Richa Singh, Mayank Vatsa, A. Noore, Diego Gragnaniello, Carlo Sansone, L. Verdoliva, Lingxiao He, Yiwei Ru, Haiqing Li, Nianfeng Liu, Zhenan Sun, T. Tan
Presentation attacks, such as using a contact lens with a printed pattern or a printout of an iris, can be utilized to bypass a biometric security system. The first international iris liveness competition was launched in 2013 to assess the performance of presentation attack detection (PAD) algorithms, with a second competition in 2015. This paper presents the results of the third competition, LivDet-Iris 2017. Three software-based approaches to presentation attack detection were submitted. Four datasets of live and spoof images were tested, with an additional cross-sensor test. New datasets and novel data situations made this competition more difficult than the previous ones. The anonymous submission received the best results, with a rate of rejected live samples of 3.36% and a rate of accepted spoof samples of 14.71%. The results show that, even with advances, printed iris attacks as well as patterned contact lenses remain difficult for software-based systems to detect. Printed iris images were easier to differentiate from live images than patterned contact lenses, as was also seen in previous competitions.
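For reference, the two quoted error rates appear to correspond to the standard PAD metrics (in ISO/IEC 30107-3 terminology, BPCER for rejected live samples and APCER for accepted spoof samples; that mapping is our reading, not stated in the abstract). A minimal sketch of how such rates are computed from per-sample decisions:

```python
from typing import Sequence, Tuple

def pad_error_rates(labels: Sequence[str],
                    decisions: Sequence[str]) -> Tuple[float, float]:
    """Rate of rejected live samples (BPCER) and rate of accepted
    spoof samples (APCER) from ground truth and classifier decisions,
    each given as 'live' or 'spoof'."""
    live = [d for l, d in zip(labels, decisions) if l == "live"]
    spoof = [d for l, d in zip(labels, decisions) if l == "spoof"]
    bpcer = sum(d == "spoof" for d in live) / len(live)    # live wrongly rejected
    apcer = sum(d == "live" for d in spoof) / len(spoof)   # spoof wrongly accepted
    return bpcer, apcer

# pad_error_rates(["live", "spoof", "live"], ["live", "live", "spoof"])
# -> (0.5, 1.0): one of two live samples rejected, the one spoof accepted
```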
{"title":"LivDet iris 2017 — Iris liveness detection competition 2017","authors":"David Yambay, Benedict Becker, Naman Kohli, Daksha Yadav, A. Czajka, K. Bowyer, S. Schuckers, Richa Singh, Mayank Vatsa, A. Noore, Diego Gragnaniello, Carlo Sansone, L. Verdoliva, Lingxiao He, Yiwei Ru, Haiqing Li, Nianfeng Liu, Zhenan Sun, T. Tan","doi":"10.1109/BTAS.2017.8272763","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272763","url":null,"abstract":"Presentation attacks such as using a contact lens with a printed pattern or printouts of an iris can be utilized to bypass a biometric security system. The first international iris liveness competition was launched in 2013 in order to assess the performance of presentation attack detection (PAD) algorithms, with a second competition in 2015. This paper presents results of the third competition, LivDet-Iris 2017. Three software-based approaches to Presentation Attack Detection were submitted. Four datasets of live and spoof images were tested with an additional cross-sensor test. New datasets and novel situations of data have resulted in this competition being of a higher difficulty than previous competitions. Anonymous received the best results with a rate of rejected live samples of 3.36% and rate of accepted spoof samples of 14.71%. The results show that even with advances, printed iris attacks as well as patterned contacts lenses are still difficult for software-based systems to detect. Printed iris images were easier to be differentiated from live images in comparison to patterned contact lenses as was also seen in previous competitions.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128246943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fingerprint indexing based on pyramid deep convolutional feature
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272699
Dehua Song, Jufu Feng
Fingerprint ridges contain rich discriminative information for fingerprint indexing; however, it is hard for rule-based methods to describe ridge structure because of nonlinear distortion. This paper investigates representing the structure of ridges with a Deep Convolutional Neural Network (DCNN). The indexing approach partitions the fingerprint image into increasingly fine sub-regions and extracts a feature from each sub-region with the DCNN, forming a pyramid deep convolutional feature that represents both global patterns and local details (especially minutiae). Extensive experimental results show that the proposed method achieves better accuracy and efficiency than other prominent indexing approaches. Finally, occlusion sensitivity, visualization and fingerprint reconstruction techniques are employed to explore which attributes of ridges are described by the deep convolutional feature.
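A minimal sketch of the coarse-to-fine pyramid idea, assuming a generic grid partition; `extract_fn` is a hypothetical callable standing in for the trained DCNN, and the grid levels are illustrative rather than the paper's configuration:

```python
import numpy as np

def pyramid_deep_feature(image: np.ndarray, extract_fn,
                         levels=(1, 2, 4)) -> np.ndarray:
    """Concatenate DCNN features over increasingly fine sub-regions.

    Level n splits the image into an n-by-n grid; coarse levels capture
    global ridge patterns, fine levels capture local details.
    """
    h, w = image.shape[:2]
    features = []
    for n in levels:
        ys = np.linspace(0, h, n + 1, dtype=int)
        xs = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                patch = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                features.append(extract_fn(patch))  # 1-D vector per patch
    return np.concatenate(features)  # global-to-local pyramid descriptor
```

Indexing then reduces to nearest-neighbor search over these fixed-length descriptors.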
{"title":"Fingerprint indexing based on pyramid deep convolutional feature","authors":"Dehua Song, Jufu Feng","doi":"10.1109/BTAS.2017.8272699","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272699","url":null,"abstract":"The ridges of fingerprint contain enormous discriminative information for fingerprint indexing, however it is hard to depict the structure of ridges for rule-based methods because of nonlinear distortion. This paper investigates to represent the structure of ridges by Deep Convolutional Neural Network (DCNN). The indexing approach partitions the fingerprint image into increasing fine sub-region and extracts feature from each sub-region by DCNN, forming pyramid deep convolutional feature, to represent the global patterns and local details (especially minutiae). Extensive experimental results show that the proposed method achieves better performance on accuracy and efficiency than other prominent indexing approaches. Finally, occlusion sensitivity, visualization and fingerprint reconstruction techniques are employed to explore which attributes of ridges are described in deep convolutional feature.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129356256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formulae for consistent biometric score level fusion
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272714
J. Hube
In an operational setting, the ability to set thresholds to meet error rate targets is of key practical importance for a biometric application deployment. Consequently, there is a need to consider how output scores from multi-modal score-level fusion are defined. We show a method to ensure these fused scores are consistent with a known input score definition. We derive fusion formulae for the case of input scores defined as false acceptance rates. We provide examples to highlight implementation issues.
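As a hedged illustration of what such consistency can mean (this is the classical result for combining two independent p-value-like statistics, not necessarily the paper's derivation): if each modality's score is calibrated so that a score equals its false acceptance rate, impostor scores are uniform on [0, 1], and the product of two scores can be remapped so that the fused score is again a false acceptance rate:

```latex
% For independent FAR-calibrated impostor scores U_1, U_2 ~ Uniform(0,1),
% the product statistic t = u_1 u_2 has cumulative distribution
%   P(U_1 U_2 \le t) = t - t \ln t,  for 0 < t \le 1,
% so the remapped fused score is itself a false acceptance rate:
\[
  \mathrm{FAR}_{\text{fused}}(u_1, u_2)
    \;=\; u_1 u_2 \bigl(1 - \ln(u_1 u_2)\bigr).
\]
```

Without such a remapping, thresholding the raw product would not hit the intended operational error rate target, which is exactly the consistency problem the abstract raises.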
{"title":"Formulae for consistent biometric score level fusion","authors":"J. Hube","doi":"10.1109/BTAS.2017.8272714","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272714","url":null,"abstract":"In an operational setting of key practical importance for a biometric application deployment is the ability to set thresholds to meet error rate targets. Consequently there is a need to consider how output scores from multi-modal score-level fusion are defined. We show a method to ensure these fused scores are consistent with a known input score definition. We derive fusion formulae for the case of input scores based on false acceptance rates. We provide examples to highlight implementation issues.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129650728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fingerprint presentation attacks detection based on the user-specific effect
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272717
Luca Ghiani, G. Marcialis, F. Roli
The similarities among different acquisitions of the same fingerprint have so far never been taken into account in the feature space designed to detect fingerprint presentation attacks. The existence of such resemblances has only been shown in a recent work, where the authors were able to describe what they called the “user-specific effect”. In this paper, we present a first attempt to take advantage of this effect in order to improve the performance of a fingerprint presentation attack detection (FPAD) system. In particular, we conceived a three-bit binary code aimed at detecting this effect. Coupled with a classifier trained according to the standard protocol followed, for example, in the LivDet competition, this approach allowed us to obtain better accuracy than the “generic users” classifier alone.
{"title":"Fingerprint presentation attacks detection based on the user-specific effect","authors":"Luca Ghiani, G. Marcialis, F. Roli","doi":"10.1109/BTAS.2017.8272717","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272717","url":null,"abstract":"The similarities among different acquisitions of the same fingerprint have never been taken into account, so far, in the feature space designed to detect fingerprint presentation attacks. Actually, the existence of such resemblances has only been shown in a recent work where the authors have been able to describe what they called the “user-specific effect”. We present in this paper a first attempt to take advantage of this in order to improve the performance of a FPAD system. In particular, we conceived a binary code of three bits aimed to “detect” such effect. Coupled with a classifier trained according to the standard protocol followed, for example, in the LivDet competition, this approach allowed us to get a better accuracy compared to that obtained with the “generic users” classifier alone.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"47 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122193436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identifying the origin of Iris images based on fusion of local image descriptors and PRNU based techniques
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272710
Christof Kauba, L. Debiasi, A. Uhl
Being aware of the origin (source sensor) of iris images offers several advantages. Identifying the specific sensor unit supports ensuring the integrity and authenticity of iris images and thus detecting insertion attacks on a biometric system. Moreover, knowing the sensor model makes selective processing, such as image enhancement, feasible. In order to determine the origin (i.e. dataset) of near-infrared (NIR) and visible spectrum iris/ocular images, we evaluate the performance of three different approaches: one based on photo response non-uniformity (PRNU), one based on image texture features, and the fusion of both. Our first set of experiments includes 19 different datasets comprising different sensors and image resolutions. The second set includes 6 different camera models with 5 instances each. We evaluate the applicability of the three approaches in these test scenarios from a forensic and non-forensic perspective.
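A minimal sketch of the standard PRNU pipeline from the camera-forensics literature (not the paper's exact implementation; in particular, a Gaussian filter stands in here for the wavelet denoiser usually used):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Noise residual W = I - denoise(I); the residual carries the
    sensor's multiplicative PRNU pattern."""
    f = img.astype(np.float64)
    return f - gaussian_filter(f, sigma)

def estimate_prnu(images: list) -> np.ndarray:
    """PRNU fingerprint estimate K = sum(W_i * I_i) / sum(I_i^2) from
    several same-size images taken with one sensor."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img in images:
        i = img.astype(np.float64)
        num += noise_residual(i) * i
        den += i * i
    return num / np.maximum(den, 1e-12)

def prnu_correlation(img: np.ndarray, fingerprint: np.ndarray) -> float:
    """Normalized correlation between a probe's residual and a candidate
    sensor fingerprint; higher suggests the same source sensor."""
    w = noise_residual(img).ravel()
    k = (fingerprint * img.astype(np.float64)).ravel()
    w -= w.mean(); k -= k.mean()
    return float(w @ k / (np.linalg.norm(w) * np.linalg.norm(k)))
```

Source attribution then assigns the probe to the sensor whose fingerprint yields the highest correlation, optionally fused with a texture-feature classifier as in the paper.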
{"title":"Identifying the origin of Iris images based on fusion of local image descriptors and PRNU based techniques","authors":"Christof Kauba, L. Debiasi, A. Uhl","doi":"10.1109/BTAS.2017.8272710","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272710","url":null,"abstract":"Being aware of the origin (source sensor) of an iris images offers several advantages. Identifying the specific sensor unit supports ensuring the integrity and authenticity of iris images and thus detecting insertion attacks at a biometric system. Moreover, by knowing the sensor model selective processing, such as image enhancements, becomes feasible. In order to determine the origin (i.e. dataset) of near-infrared (NIR) and visible spectrum iris/ocular images, we evaluate the performance of three different approaches, a photo response non-uniformity (PRNU) based and an image texture feature based one, and the fusion of both. Our first set of experiments includes 19 different datasets comprising different sensors and image resolutions. The second set includes 6 different camera models with 5 instances each. We evaluate the applicability of the three approaches in these test scenarios from a forensic and non-forensic perspective.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124325481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep features-based expression-invariant tied factor analysis for emotion recognition
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272741
Sarasi Munasinghe, C. Fookes, S. Sridharan
Video-based facial expression recognition is an open research challenge not solved by the current state of the art. On the other hand, static-image-based emotion recognition is highly important when videos are not available and human emotions need to be determined from a single shot only. This paper proposes sequence-based and image-based tied factor analysis frameworks with a deep network that simultaneously address these two problems. For video-based data, we first extract deep convolutional temporal appearance features from image sequences; these features are then fed into a generative model that constructs a low-dimensional observed space for all individuals, depending on the facial expression sequences. After learning the sequential expression components of the transition matrices among the expression manifolds, we use a Gaussian probabilistic approach to design an efficient classifier for temporal facial expression recognition. Furthermore, we analyse the utility of the proposed video-based methods for image-based emotion recognition by learning static tied factor analysis parameters. Meanwhile, this model can be used to predict expressive face image sequences from given neutral faces. Recognition results achieved on three public benchmark databases (CK+, JAFFE, and FER2013) clearly indicate that our approach achieves effective performance over current techniques for handling sequential and static facial expression variations.
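For context, a hedged sketch of the classical tied factor analysis model that this line of work builds on (the paper's expression-specific, deep-feature variant may differ in details): an observation of identity i under condition (here, expression) j is generated from a latent identity factor that is tied across conditions, while the loading matrix, offset and noise are condition-specific:

```latex
\[
  x_{ij} = W_j\, h_i + \mu_j + \varepsilon_{ij},
  \qquad h_i \sim \mathcal{N}(0, I),
  \qquad \varepsilon_{ij} \sim \mathcal{N}(0, \Sigma_j),
\]
% W_j, mu_j, Sigma_j: loading matrix, mean and noise covariance tied to
% expression j; h_i: identity factor shared by all expressions of person i.
% Recognition marginalizes over h_i, so the match score is invariant to
% which expression each image was observed under; generating x_{ij'} from
% an inferred h_i is what enables predicting expressive faces from a
% neutral input.
```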
{"title":"Deep features-based expression-invariant tied factor analysis for emotion recognition","authors":"Sarasi Munasinghe, C. Fookes, S. Sridharan","doi":"10.1109/BTAS.2017.8272741","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272741","url":null,"abstract":"Video-based facial expression recognition is an open research challenge not solved by the current state-of-the-art. On the other hand, static image based emotion recognition is highly important when videos are not available and human emotions need to be determined from a single shot only. This paper proposes sequential-based and image-based tied factor analysis frameworks with a deep network that simultaneously addresses these two problems. For video-based data, we first extract deep convolutional temporal appearance features from image sequences and then these features are fed into a generative model that constructs a low-dimensional observed space for all individuals, depending on the facial expression sequences. After learning the sequential expression components of the transition matrices among the expression manifolds, we use a Gaussian probabilistic approach to design an efficient classifier for temporal facial expression recognition. Furthermore, we analyse the utility of proposed video-based methods for image-based emotion recognition learning static tied factor analysis parameters. Meanwhile, this model can be used to predict the expressive face image sequences from given neutral faces. Recognition results achieved on three public benchmark databases: CK+, JAFFE, and FER2013, clearly indicate our approach achieves effective performance over the current techniques of handling sequential and static facial expression variations.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117172718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}