Attribute-based continuous user authentication on mobile devices
Pouya Samangouei, Vishal M. Patel, R. Chellappa
2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)
Pub Date: 2015-12-17 | DOI: 10.1109/BTAS.2015.7358748
We present a method that uses facial attributes for continuous authentication of smartphone users. Binary attribute classifiers, trained on the PubFig dataset, provide compact visual descriptions of faces. The learned classifiers are applied to images of the current user of a mobile device to extract attributes, and authentication is then performed by simply comparing the acquired attributes with the enrolled attributes of the original user. Extensive experiments on two publicly available unconstrained mobile face video datasets show that our method captures meaningful facial attributes and outperforms the previously proposed LBP-based authentication method.
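The enrollment-and-compare step described above reduces to a distance test between attribute vectors. A minimal sketch, where the threshold, the mean-absolute-difference metric, and the example attribute scores are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def authenticate(acquired, enrolled, threshold=0.5):
    """Accept the current user if the mean absolute difference between
    acquired and enrolled attribute scores falls below a threshold.
    (Threshold and metric are illustrative, not the paper's.)"""
    diff = np.mean(np.abs(np.asarray(acquired) - np.asarray(enrolled)))
    return bool(diff < threshold)

# Hypothetical enrolled owner: attribute classifier outputs in [0, 1]
# (e.g. "male", "eyeglasses", "beard").
enrolled = [0.9, 0.1, 0.8]
print(authenticate([0.85, 0.15, 0.75], enrolled))  # similar face -> True
print(authenticate([0.10, 0.90, 0.20], enrolled))  # different face -> False
```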
Robust face recognition based on saliency maps of sigma sets
Ramya Srinivasan, A. Roy-Chowdhury
Pub Date: 2015-12-17 | DOI: 10.1109/BTAS.2015.7358793
We propose a robust unsupervised method for face recognition in which saliency maps of second-order statistics are employed as image descriptors. In particular, we leverage region covariance matrices (RCMs) and their enhancement based on sigma sets to construct saliency maps of face images. Sigma sets are low-dimensional, robust to rotation and illumination changes, and efficient in distance evaluation. Further, they provide a natural way to combine multiple features and hence offer a simple mechanism for building otherwise tedious saliency maps. Using the resulting saliency maps as face descriptors brings the additional advantage of emphasizing the most discriminative regions of a face, thereby improving recognition performance. We demonstrate the effectiveness of the proposed method on face photo-sketch recognition, where we achieve performance comparable to the state of the art without having to perform sketch synthesis.
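A sigma set can be derived from a region covariance matrix through its Cholesky factor; one common construction scales the factor's columns by the square root of the dimension. A toy sketch under that assumption (the random features and the scaling constant are illustrative, not the paper's setup):

```python
import numpy as np

def region_covariance(features):
    """Covariance of per-pixel feature vectors (rows = pixels)."""
    return np.cov(features, rowvar=False)

def sigma_set(cov, alpha=None):
    """Compact sigma-set representation of a covariance matrix:
    the scaled columns of its lower Cholesky factor. With
    alpha = sqrt(d), the sigma points reconstruct the covariance."""
    d = cov.shape[0]
    alpha = np.sqrt(d) if alpha is None else alpha
    L = np.linalg.cholesky(cov)
    return alpha * L  # each column is one sigma point

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 5))  # e.g. intensity + gradient features
S = sigma_set(region_covariance(feats))
print(S.shape)  # (5, 5): d sigma points of dimension d
```

Distances between such descriptors can then be evaluated in ordinary Euclidean space, which is what makes sigma sets cheaper to compare than the covariance matrices themselves.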
Exploiting polarization-state information for cross-spectrum face recognition
Nathan J. Short, Shuowen Hu, Prudhvi K. Gurram, K. Gurton
Pub Date: 2015-12-17 | DOI: 10.1109/BTAS.2015.7358758
Face recognition research has primarily focused on the visible spectrum, due to the prevalence and low cost of visible cameras. However, face recognition in the visible spectrum is sensitive to illumination variations and is infeasible in low-light or nighttime settings. In contrast, thermal imaging acquires naturally emitted radiation from facial skin tissue, and is therefore ideal for nighttime surveillance and intelligence-gathering operations. However, conventional thermal face imagery lacks the textural and geometric details that are present in visible-spectrum face signatures. In this work, we further explore the impact of polarimetric imaging in the LWIR spectrum on face recognition. Polarization-state information provides textural and geometric facial details unavailable with conventional thermal imaging. Since the frequency content of conventional thermal, polarimetric thermal, and visible images is quite different, we propose a spatial-correlation-based procedure to optimize the filtering of polarimetric thermal and visible face images to further facilitate cross-spectrum face recognition. Additionally, we use a more extensive gallery database to more robustly demonstrate an improvement in the performance of cross-spectrum face recognition using polarimetric thermal imaging.
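The idea of selecting a filter band by spatial correlation can be illustrated with a difference-of-Gaussians band-pass filter and normalized cross-correlation between the two modalities. This is only a hedged toy version of such a procedure, not the paper's optimization; the band list and synthetic images are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_pass(img, lo, hi):
    """Difference-of-Gaussians band-pass filter (sigma lo < hi)."""
    return gaussian_filter(img, lo) - gaussian_filter(img, hi)

def correlation(a, b):
    """Normalized cross-correlation of two same-size images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def best_band(polar_img, visible_img, bands):
    """Pick the band in which the filtered images of the two
    modalities correlate most strongly."""
    scores = [correlation(band_pass(polar_img, lo, hi),
                          band_pass(visible_img, lo, hi))
              for lo, hi in bands]
    return bands[int(np.argmax(scores))]

# Synthetic stand-ins: two noisy views sharing smooth structure.
rng = np.random.default_rng(1)
base = gaussian_filter(rng.normal(size=(64, 64)), 2)
polar = base + 0.3 * rng.normal(size=(64, 64))
vis = base + 0.3 * rng.normal(size=(64, 64))
print(best_band(polar, vis, [(1, 2), (2, 4), (4, 8)]))
```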
BioEye 2015: Competition on biometrics via eye movements
Oleg V. Komogortsev, Ioannis Rigas
Pub Date: 2015-12-17 | DOI: 10.1109/BTAS.2015.7358750
Biometric recognition via eye movement-driven features is an emerging field of research. Eye movement cues are characterized by their non-static nature, their encapsulation of both physical and behavioral traits, and the possibility of being recorded in tandem with other modalities, e.g. the iris. The BioEye 2015 competition was organized to spur the evolution of the eye movement biometrics field, with a particular focus on the issues facing researchers in eye movement recognition, e.g. the quality of eye movement recordings, different visual stimulus types, and the effect of template aging on recognition accuracy. This paper describes the details and results of the BioEye 2015 competition, which provided the largest eye movement biometric database to date, containing records from 306 subjects, two stimulus types, and recordings separated by short and long time intervals.
Statistical evaluation of up-to-three-attempt iris recognition
A. Czajka, K. Bowyer
Pub Date: 2015-12-17 | DOI: 10.1109/BTAS.2015.7358797
Real-world biometric applications often operate in the context of an identity transaction that allows up to three attempts. That is, if a biometric sample is acquired and does not result in a match, the user is allowed to acquire a second sample, and if that again does not result in a match, the user is allowed to acquire a third sample. If the third sample does not result in a match, the transaction ends with no match. We report results of an experiment to determine whether successive attempts can be considered independent samples from the same distribution, and whether and how the quality of a biometric sample changes across successive attempts. To our knowledge, this is the first published research to investigate the statistics of multi-attempt biometric transactions. We find that the common assumption that attempt outcomes come from independent and identically distributed random variables in multi-attempt biometric transactions is incorrect.
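Under the IID assumption that this paper tests (and rejects), the transaction-level false non-match rate would simply be the per-attempt rate raised to the number of attempts, since the transaction fails only if every attempt fails. A sketch of that naive prediction (the 5% per-attempt rate is an illustrative number, not from the paper):

```python
def transaction_fnmr_iid(single_attempt_fnmr, attempts=3):
    """Transaction-level false non-match rate of an up-to-k-attempt
    transaction, assuming attempts are independent and identically
    distributed: the transaction fails only if all attempts fail."""
    return single_attempt_fnmr ** attempts

# An (incorrect, per the paper) IID assumption with a 5% per-attempt
# FNMR would predict a 0.0125% transaction-level FNMR.
print(transaction_fnmr_iid(0.05))  # 1.25e-04
```

The paper's finding that attempts are not IID means real transaction-level error rates cannot be extrapolated this way from single-attempt statistics.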
Person verification via eye movement-driven text reading model
Evgeniy Abdulin, Oleg V. Komogortsev
Pub Date: 2015-12-17 | DOI: 10.1109/BTAS.2015.7358786
This paper presents a reading-based eye movement biometrics model. The model processes passages of text and extracts metrics that represent the physiological and behavioral aspects of eye movements during reading. When tested on a database of eye movements from 103 individuals, the model yielded an Equal Error Rate of 10.2%. The proposed method performed better in the template-aging scenario than comparable eye movement-driven biometrics methods.
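An Equal Error Rate like the 10.2% reported here is the operating point at which the false rejection rate on genuine scores equals the false acceptance rate on impostor scores. A simple sketch of how an EER is estimated from score samples (the synthetic score distributions are assumptions, not the paper's data):

```python
import numpy as np

def eer(genuine, impostor):
    """Equal Error Rate: sweep thresholds over all observed scores and
    return the error rate where FRR and FAR are closest (higher score
    means a better match)."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, best_eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(genuine < t)    # genuine users rejected
        far = np.mean(impostor >= t)  # impostors accepted
        if abs(frr - far) < best_gap:
            best_gap, best_eer = abs(frr - far), (frr + far) / 2
    return float(best_eer)

rng = np.random.default_rng(0)
gen = rng.normal(1.0, 0.5, 1000)  # genuine scores: higher on average
imp = rng.normal(0.0, 0.5, 1000)  # impostor scores: lower on average
print(round(eer(gen, imp), 3))
```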
Fusing binary templates for multi-biometric cryptosystems
Guangcan Mai, M. Lim, P. Yuen
Pub Date: 2015-12-17 | DOI: 10.1109/BTAS.2015.7358764
Biometric cryptosystems have proven to be a promising approach to template protection. Since most methods of this kind require binary input, extending them to multiple modalities requires binary template fusion. This paper addresses the performance and security of multi-biometric systems and proposes a new binary template fusion method that maximizes the discriminability and entropy of the fused template by reducing bit dependency. Three publicly available datasets are used for the experiments. Experimental results show that the proposed method outperforms state-of-the-art methods.
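One crude way to see why reducing bit dependency matters: after concatenating two binary templates, greedily keep the bit positions least correlated with those already selected, so the retained bits carry more independent information (and hence more entropy). This is a toy stand-in, not the paper's fusion method; the templates and selection criterion are illustrative:

```python
import numpy as np

def fuse_templates(t1, t2, n_bits):
    """Concatenate two binary templates (rows = samples) and greedily
    select n_bits positions with low pairwise correlation, as a crude
    proxy for reducing bit dependency in the fused template."""
    fused = np.hstack([t1, t2]).astype(float)
    corr = np.abs(np.corrcoef(fused, rowvar=False))
    selected = [0]
    while len(selected) < n_bits:
        remaining = [j for j in range(fused.shape[1]) if j not in selected]
        # keep the bit whose strongest correlation with the already
        # selected bits is weakest
        j = min(remaining, key=lambda j: corr[j, selected].max())
        selected.append(j)
    return np.sort(selected)

rng = np.random.default_rng(0)
face = rng.integers(0, 2, size=(20, 8))  # toy binary face templates
iris = rng.integers(0, 2, size=(20, 8))  # toy binary iris templates
print(fuse_templates(face, iris, 6))     # indices of 6 retained bits
```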
fNIRS: A new modality for brain activity-based biometric authentication
Abdul Serwadda, V. Phoha, Sujit Poudel, Leanne M. Hirshfield, Danushka Bandara, Sarah E. Bratt, Mark R. Costa
Pub Date: 2015-12-17 | DOI: 10.1109/BTAS.2015.7358763
There is a rapidly increasing amount of research on the use of brain activity patterns as a basis for biometric user verification. The vast majority of this research is based on electroencephalography (EEG), a technology that measures electrical activity along the scalp. In this paper, we evaluate functional near-infrared spectroscopy (fNIRS) as an alternative approach to brain activity-based user authentication. fNIRS is centered on measuring light absorbed by blood and, compared to EEG, has a higher signal-to-noise ratio, is better suited for use during normal working conditions, and has a much higher spatial resolution, which enables targeted measurements of specific brain regions. On a dataset of 50 users analyzed with an SVM and a Naïve Bayes classifier, fNIRS yields EERs of 0.036 and 0.046, respectively, using our best channel configuration. Further, we present results on the brain areas that demonstrated the highest discriminative power. Our findings indicate that fNIRS has significant promise as a biometric authentication modality.
Pace independent mobile gait biometrics
Yu Zhong, Yunbin Deng, Geoffrey S. Meltzner
Pub Date: 2015-12-17 | DOI: 10.1109/BTAS.2015.7358784
Accelerometers embedded in mobile devices have shown great potential for non-obtrusive gait biometrics by directly capturing a user's characteristic locomotion. Although gait analysis using these sensors has achieved highly accurate authentication and identification performance under controlled experimental settings, the robustness of such algorithms in the presence of the assorted variations typical of real-world scenarios remains a major challenge. In this paper, we propose a novel pace-independent mobile gait biometrics algorithm that is insensitive to variability in walking speed. Our approach also exploits recent advances in invariant mobile gait representation to be independent of sensor rotation. Performance evaluations on a realistic mobile gait dataset containing 51 subjects confirm the merits of the proposed algorithm for practical mobile gait authentication.
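Pace independence can be illustrated by resampling each gait cycle to a fixed length, so that cycles recorded at different walking speeds become directly comparable. A minimal sketch under the assumption that cycle segmentation has already been done (the sine-wave cycles are synthetic stand-ins for accelerometer magnitude):

```python
import numpy as np

def normalize_cycle(signal, target_len=100):
    """Resample one gait cycle (e.g. accelerometer magnitude) to a
    fixed number of samples, removing walking-speed differences
    before template matching."""
    x_old = np.linspace(0, 1, len(signal))
    x_new = np.linspace(0, 1, target_len)
    return np.interp(x_new, x_old, signal)

# The same gait cycle walked slowly (120 samples) and quickly (80).
slow = np.sin(np.linspace(0, 2 * np.pi, 120))
fast = np.sin(np.linspace(0, 2 * np.pi, 80))
a, b = normalize_cycle(slow), normalize_cycle(fast)
print(np.max(np.abs(a - b)) < 0.01)  # nearly identical after resampling
```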
Exploiting stable and discriminative iris weight map for iris recognition under less constrained environment
Yang Hu, K. Sirlantzis, G. Howells
Pub Date: 2015-12-17 | DOI: 10.1109/BTAS.2015.7358759
In this paper, we address the problem of iris recognition in less constrained environments. We propose a novel iris weight map for the iris matching stage that improves the robustness of iris recognition to the noise and degradations of less constrained environments. The proposed iris weight map is class-specific, considering both the bit stability and the bit discriminability of iris codes, and combines a stability map with a discriminability map. The stability map focuses on intra-class bit stability, aiming to improve intra-class matching: it assigns more weight to bits that are highly consistent with their noiseless estimates, which are obtained via low-rank approximation. The discriminability map models inter-class bit discriminability: it emphasizes the more discriminative bits in iris codes to improve inter-class separation via a 1-to-N strategy. Experimental results demonstrate that the proposed iris weight map achieves improved identification and verification performance compared to state-of-the-art algorithms on publicly available datasets.
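A per-bit weight map of this kind is typically applied at matching time as a weighted Hamming distance over the iris codes. A minimal sketch (the weight values, the normalization, and the toy codes are illustrative assumptions, not the paper's exact matcher):

```python
import numpy as np

def weighted_hamming(code_a, code_b, weight_map, mask=None):
    """Hamming distance between two binary iris codes in which each
    disagreeing bit is scaled by a per-bit weight (e.g. combining
    stability and discriminability), normalized by the total weight."""
    code_a, code_b = np.asarray(code_a), np.asarray(code_b)
    w = np.asarray(weight_map, dtype=float)
    if mask is not None:               # ignore occluded bits
        w = w * np.asarray(mask)
    disagree = (code_a != code_b).astype(float)
    return float(np.sum(w * disagree) / np.sum(w))

a = np.array([1, 0, 1, 1, 0, 0])
b = np.array([1, 1, 1, 0, 0, 0])       # disagrees at positions 1 and 3
uniform = np.ones(6)
weights = np.array([3, 0.5, 3, 0.5, 3, 3])  # trust stable bits more
print(weighted_hamming(a, b, uniform))  # 0.333...: plain Hamming distance
print(weighted_hamming(a, b, weights))  # ~0.077: unstable bits down-weighted
```

Down-weighting the unstable positions shrinks the genuine-pair distance, which is exactly the intra-class matching improvement the stability map targets.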