MFC Datasets: Large-Scale Benchmark Datasets for Media Forensic Challenge Evaluation
Pub Date: 2019-01-01 | DOI: 10.1109/WACVW.2019.00018
Haiying Guan, Mark Kozak, Eric Robertson, Yooyoung Lee, Amy N. Yates, Andrew Delgado, Daniel Zhou, Timothée Kheyrkhah, Jeff M. Smith, J. Fiscus
We provide a benchmark for the digital Media Forensics Challenge (MFC) evaluations. Our comprehensive data comprises over 176,000 high-provenance (HP) images and 11,000 HP videos; more than 100,000 manipulated images and 4,000 manipulated videos; and 35 million internet images and 300,000 video clips. We have designed and generated a series of development, evaluation, and challenge datasets, and over the past two years have used them to assess progress and thoroughly analyze the performance of diverse systems on a variety of media forensics tasks. In this paper, we first introduce the objectives, challenges, and approaches to building media forensics evaluation datasets. We then discuss our approaches to forensic dataset collection, annotation, and manipulation, and present the design and infrastructure used to effectively and efficiently build evaluation datasets that support various evaluation tasks. We also build an infrastructure that, given a specified query, selects customized evaluation subsets for a targeted analysis report. Finally, we present results from past evaluations.
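As a rough illustration of what query-driven subset selection can look like, the sketch below filters a dataset manifest with a query string. The manifest columns and query syntax are illustrative assumptions, not the actual MFC infrastructure described in the paper.

```python
# Rough sketch of query-driven evaluation-subset selection over a dataset
# manifest. The manifest schema and query format are illustrative assumptions.
import pandas as pd

# Tiny in-memory stand-in for a dataset manifest.
manifest = pd.DataFrame({
    "probe_id":          ["p1", "p2", "p3", "p4"],
    "media_type":        ["image", "image", "video", "image"],
    "manipulation_type": ["splice", "clone", "splice", "none"],
})

def select_subset(manifest: pd.DataFrame, query: str) -> pd.DataFrame:
    """Return the manifest rows matching a pandas query string."""
    return manifest.query(query)

# e.g. all manipulated images involving a splice operation
subset = select_subset(manifest, "media_type == 'image' and manipulation_type == 'splice'")
print(f"Selected {len(subset)} probes for the targeted analysis report")
```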
{"title":"MFC Datasets: Large-Scale Benchmark Datasets for Media Forensic Challenge Evaluation","authors":"Haiying Guan, Mark Kozak, Eric Robertson, Yooyoung Lee, Amy N. Yates, Andrew Delgado, Daniel Zhou, Timothée Kheyrkhah, Jeff M. Smith, J. Fiscus","doi":"10.1109/WACVW.2019.00018","DOIUrl":"https://doi.org/10.1109/WACVW.2019.00018","url":null,"abstract":"We provide a benchmark for digital Media Forensics Challenge (MFC) evaluations. Our comprehensive data comprises over 176,000 high provenance (HP) images and 11,000 HP videos; more than 100,000 manipulated images and 4,000 manipulated videos; 35 million internet images and 300,000 video clips. We have designed and generated a series of development, evaluation, and challenge datasets, and used them to assess the progress and thoroughly analyze the performance of diverse systems on a variety of media forensics tasks in the past two years. In this paper, we first introduce the objectives, challenges, and approaches to building media forensics evaluation datasets. We then discuss our approaches to forensic dataset collection, annotation, and manipulation, and present the design and infrastructure to effectively and efficiently build the evaluation datasets to support various evaluation tasks. Given a specified query, we build an infrastructure that selects the customized evaluation subsets for the targeted analysis report. Finally, we demonstrate the evaluation results in the past evaluations.","PeriodicalId":254512,"journal":{"name":"2019 IEEE Winter Applications of Computer Vision Workshops (WACVW)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116876412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Automatic Face Recognition on Match Performance and Gender Bias for Children
Pub Date: 2019-01-01 | DOI: 10.1109/WACVW.2019.00023
Nisha Srinivas, Matthew Hivner, Kevin Gay, Harleen Atwal, Michael A. King, K. Ricanek
In this work, we update the body of knowledge on the performance of child face recognition using a set of commercial off-the-shelf (COTS) algorithms as well as a set of government-sponsored algorithms. In particular, this work examines the performance of multiple deep-learning face recognition systems (eight distinct solutions), establishing a performance baseline for a publicly available child dataset. Furthermore, we examine the phenomenon of gender bias in match performance across the eight systems. This work highlights the continued challenge that child face recognition poses as a function of aging. Rank-1 accuracy ranges from 0.44 to 0.78, with an average accuracy of 0.63, on a dataset of 745 unique subjects (7,990 total images). When we introduce a distractor set of approximately 10,000 child faces, rank-1 accuracy decreases across all systems by an average of 10 points. Additionally, gender bias is exhibited across all systems, even though the developers of the face recognition systems claim that a near balance of genders was used in development. The question of gender disparity is elusive: although co-factors such as makeup, expression, and hair were not explicitly controlled, the dataset does not contain substantial differences across the genders. This work contributes to the body of knowledge in multiple categories: (1) child face recognition, (2) gender bias in face recognition and the notion that females as a sub-population may exhibit Lamb characteristics according to Doddington's biometric zoo, and (3) a dataset for child face recognition.
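For readers unfamiliar with the metric, the sketch below shows how rank-1 identification accuracy can be computed from a probe-versus-gallery similarity matrix, and how appending a distractor gallery typically changes it. The similarity scores are random placeholders rather than outputs of the evaluated systems.

```python
# Sketch of rank-1 identification accuracy from a probe-vs-gallery similarity
# matrix, with and without distractors appended to the gallery.
import numpy as np

def rank1_accuracy(similarity: np.ndarray, probe_ids: np.ndarray,
                   gallery_ids: np.ndarray) -> float:
    """Fraction of probes whose highest-scoring gallery entry shares their identity."""
    top_match = gallery_ids[np.argmax(similarity, axis=1)]
    return float(np.mean(top_match == probe_ids))

rng = np.random.default_rng(0)
probe_ids = np.arange(100)                      # 100 probe identities
gallery_ids = np.arange(100)                    # mated gallery entries
sim = rng.random((100, 100))                    # placeholder similarity scores

print("rank-1 (no distractors):", rank1_accuracy(sim, probe_ids, gallery_ids))

# Append a distractor gallery (identities that never appear among the probes);
# larger galleries typically lower rank-1 accuracy, as reported in the paper.
distractor_ids = np.arange(100, 10_100)
sim_big = np.hstack([sim, rng.random((100, distractor_ids.size))])
gallery_big = np.concatenate([gallery_ids, distractor_ids])
print("rank-1 (with distractors):", rank1_accuracy(sim_big, probe_ids, gallery_big))
```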
{"title":"Exploring Automatic Face Recognition on Match Performance and Gender Bias for Children","authors":"Nisha Srinivas, Matthew Hivner, Kevin Gay, Harleen Atwal, Michael A. King, K. Ricanek","doi":"10.1109/WACVW.2019.00023","DOIUrl":"https://doi.org/10.1109/WACVW.2019.00023","url":null,"abstract":"In this work we update the body of knowledge on the performance of child face recognition against a set of commercial-off-the-shelf (COTS) algorithms as well as a set of government sponsored algorithms. In particular, this work examines performance of multiple deep learning face recognition systems (8 distinct solutions) establishing a performance base line for a publicly available child dataset. Furthermore, we examine the phenomenon of gender bias as a function of match performance across the eight (8) systems. This work highlights the continued challenge that exists for child face recognition as a function of aging. Rank-1 accuracy ranges from 0.44 to 0.78 with an average accuracy of 0.63 on a dataset of 745 unique subjects (7,990 total images). Furthermore, when we introduce a distractor set of approximately 10; 000 child faces the rank-1 accuracy decreases across all systems on an average of 10 points. Additionally, the phenomenon of gender bias is exhibited across all systems, although the developers of the face recognition systems claim a near balance of genders was used in the development. The question of gender disparity is elusive, and although co-factors such as makeup, expression, and hair were not explicitly controlled, the dataset does not contain substantial differences across the genders. This work contributes to the body of knowledge in multiple categories, 1. child face recognition, 2. gender bias for face recognition and the notion that females as a sub-population may exhibit Lamb characteristics according to Doddington's Biometric Zoo, and 3. a dataset for child face recognition.","PeriodicalId":254512,"journal":{"name":"2019 IEEE Winter Applications of Computer Vision Workshops (WACVW)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116345366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations
Pub Date: 2019-01-01 | DOI: 10.1109/WACVW.2019.00020
Falko Matern, C. Riess, M. Stamminger
High-quality face editing in videos is a growing concern, as it spreads distrust in video content. However, upon closer examination, many face editing algorithms exhibit artifacts that resemble classical computer vision issues stemming from face tracking and editing. This raises the question of how difficult it is to expose artificial faces produced by current generators. To this end, we review current facial editing methods and several characteristic artifacts from their processing pipelines. We also show that relatively simple visual artifacts can already be quite effective in exposing such manipulations, including Deepfakes and Face2Face. Since the methods are based on visual features, they are easily explained even to non-technical audiences. The methods are easy to implement and can be rapidly adapted to new manipulation types when little data is available. Despite their simplicity, the methods achieve AUC values of up to 0.866.
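The sketch below shows the general recipe in miniature: score each face with a few hand-crafted visual features, train a simple classifier, and report AUC. The feature values are synthetic stand-ins; in a real implementation, cues such as the eye, teeth, and face-border artifacts discussed in the paper would be extracted around detected facial landmarks.

```python
# Sketch of classifying faces as real vs. manipulated from a few hand-crafted
# visual features and reporting AUC. Feature values are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Each row: [left/right eye colour difference, missing-detail score, border blur]
features_real = rng.normal(0.0, 1.0, (n, 3))
features_fake = rng.normal(0.7, 1.0, (n, 3))   # shifted to simulate artifacts
X = np.vstack([features_real, features_fake])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, scores))
```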
{"title":"Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations","authors":"Falko Matern, C. Riess, M. Stamminger","doi":"10.1109/WACVW.2019.00020","DOIUrl":"https://doi.org/10.1109/WACVW.2019.00020","url":null,"abstract":"High quality face editing in videos is a growing concern and spreads distrust in video content. However, upon closer examination, many face editing algorithms exhibit artifacts that resemble classical computer vision issues that stem from face tracking and editing. As a consequence, we wonder how difficult it is to expose artificial faces from current generators? To this end, we review current facial editing methods and several characteristic artifacts from their processin pipelines. We also show that relatively simple visual artifacts can be already quite effective in exposing such manipulations, including Deepfakes and Face2Face. Since the methods are based on visual features, they are easily explicable also to non-technical experts. The methods are easy to implement and offer capabilities for rapid adjustment to new manipulation types with little data available. Despite their simplicity, the methods are able to achieve AUC values of up to 0.866.","PeriodicalId":254512,"journal":{"name":"2019 IEEE Winter Applications of Computer Vision Workshops (WACVW)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127756146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can Liveness Be Automatically Detected from Latent Fingerprints?
Pub Date: 2019-01-01 | DOI: 10.1109/WACVW.2019.00021
Emanuela Marasco, S. Cando, Larry L Tang
Fingerprint liveness detection has been widely discussed as a solution for addressing the vulnerability of fingerprint recognition systems to presentation attacks. Multiple algorithms have been designed and implemented to operate on images acquired with commercial sensors, but such methodology is not currently available for latent prints. Wrongful conviction from fake latent evidence is a realistic possibility, since spoofed finger marks can plausibly be planted at a crime scene. This paper discusses concerns pertaining to spoofing friction ridges with the purpose of leaving fake marks that contaminate the evidence associated with the investigation of a crime. There is no prior literature on liveness detection from latent prints acquired at crime scenes. We illustrate the need to address this threat by experimentally evaluating existing liveness detection approaches on latent fingerprints. This study allows us to gain a deeper understanding of the advantages and disadvantages of the existing methods, and presents a novel research direction focused on investigating the effectiveness of existing countermeasures against the danger of spoofed marks. In particular, we evaluate texture-based detectors initially developed for automatic fingerprint systems as well as deep convolutional neural networks. The experiments are carried out on the NIST SD27 latent fingerprint database.
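As a concrete example of the texture-based family of detectors evaluated here, the sketch below builds uniform local binary pattern (LBP) histograms and trains an SVM to separate live from spoof impressions. It illustrates the class of approach, not the exact detectors from the paper, and uses random arrays in place of NIST SD27 latent images.

```python
# Sketch of a texture-based liveness detector: uniform LBP histograms + SVM.
# The images here are random noise placeholders, not latent prints.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    """Normalized histogram of uniform LBP codes for a grayscale image."""
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

rng = np.random.default_rng(0)
live = [rng.integers(0, 256, (128, 128)).astype(np.uint8) for _ in range(20)]
spoof = [rng.integers(0, 256, (128, 128)).astype(np.uint8) for _ in range(20)]

X = np.array([lbp_histogram(img) for img in live + spoof])
y = np.array([1] * len(live) + [0] * len(spoof))
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```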
{"title":"Can Liveness Be Automatically Detected from Latent Fingerprints?","authors":"Emanuela Marasco, S. Cando, Larry L Tang","doi":"10.1109/WACVW.2019.00021","DOIUrl":"https://doi.org/10.1109/WACVW.2019.00021","url":null,"abstract":"Fingerprint liveness detection has been widely discussed as a solution for addressing the vulnerability of fingerprint recognition systems to presentation attacks. Multiple algorithms have been designed and implemented to operate on images acquired with commercial sensors, but such methodology is not currently available for latent prints. The possibility of wrongful conviction from fake latent evidence is reasonable, since spoof finger marks can be realistically planted at a crime scene. This paper discusses concerns pertaining to spoofing friction ridges with the purpose of leaving fake marks to contaminate the evidence associated with the investigation of a crime. There is no prior literature on liveness detection from latent prints acquired from crime scene. We illustrate the need to address such threat by experimentally evaluating the existing liveness detection approaches on latent fingerprints. This study allow us to gain a deeper understanding of the advantages and disadvantages of the existing methods, and presents a novel research direction focused on investigating the effectiveness of existing countermeasures against the danger of spoofed marks. In particular, we evaluate texture-based detectors initially developed for automatic fingerprint systems and deep convolution neural networks. The experiments are carried out on the NIST SD27 latent fingerprints database.","PeriodicalId":254512,"journal":{"name":"2019 IEEE Winter Applications of Computer Vision Workshops (WACVW)","volume":"217 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114853113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synthesizing Attributes with Unreal Engine for Fine-grained Activity Analysis
Pub Date: 2019-01-01 | DOI: 10.1109/WACVW.2019.00013
Tae Soo Kim, Michael Peven, Weichao Qiu, A. Yuille, Gregory Hager
We examine the problem of activity recognition in video using simulated data for training. In contrast to the expensive task of obtaining accurate labels from real data, synthetic data creation is not only fast and scalable, but also provides ground-truth labels for more than just the activities of interest, including segmentation masks, 3D object keypoints, and more. We aim to successfully transfer a model trained on synthetic data to work on video in the real world. In this work, we provide a method for transferring from synthetic to real data at an intermediate representation of a video. We wish to perform activity recognition from the low-dimensional latent representation of a scene as a collection of visual attributes. Because the ActEV dataset lacks ground truth for the attributes of interest, specifically the orientation of cars in the ground plane with respect to the camera, we synthesize this data. We show how we can successfully transfer a car orientation classifier, and use its predictions within our defined set of visual attributes to classify actions in video.
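A rough sketch of the attribute-transfer idea follows, assuming a simple feature representation: train an orientation classifier on labelled synthetic crops, run it on real crops to obtain an orientation attribute, and feed that attribute into an activity classifier. Every array below is a random placeholder; the real system operates on rendered and real video frames.

```python
# Sketch of synthetic-to-real attribute transfer: (1) train an orientation
# classifier on synthetic data, (2) predict the orientation attribute on real
# crops, (3) classify activities from the attribute representation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Step 1: orientation classifier trained purely on synthetic data (8 bins).
X_synth = rng.normal(size=(2000, 64))           # features of rendered car crops
y_orient = rng.integers(0, 8, 2000)             # ground-truth orientation bins
orientation_clf = RandomForestClassifier(n_estimators=50, random_state=0)
orientation_clf.fit(X_synth, y_orient)

# Step 2: predict the orientation attribute on (placeholder) real crops.
X_real = rng.normal(size=(500, 64))
orient_attr = orientation_clf.predict_proba(X_real)   # soft attribute, shape (500, 8)

# Step 3: activity classifier over the attribute representation.
other_attrs = rng.normal(size=(500, 4))                # e.g. position / speed attributes
y_activity = rng.integers(0, 3, 500)                   # placeholder activity labels
activity_clf = RandomForestClassifier(n_estimators=50, random_state=0)
activity_clf.fit(np.hstack([orient_attr, other_attrs]), y_activity)
print("toy activity accuracy:",
      activity_clf.score(np.hstack([orient_attr, other_attrs]), y_activity))
```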
{"title":"Synthesizing Attributes with Unreal Engine for Fine-grained Activity Analysis","authors":"Tae Soo Kim, Michael Peven, Weichao Qiu, A. Yuille, Gregory Hager","doi":"10.1109/WACVW.2019.00013","DOIUrl":"https://doi.org/10.1109/WACVW.2019.00013","url":null,"abstract":"We examine the problem of activity recognition in video using simulated data for training. In contrast to the expensive task of obtaining accurate labels from real data, synthetic data creation is not only fast and scalable, but provides ground-truth labels for more than just the activities of interest, including segmentation masks, 3D object keypoints, and more. We aim to successfully transfer a model trained on synthetic data to work on video in the real world. In this work, we provide a method of transferring from synthetic to real at intermediate representations of a video. We wish to perform activity recognition from the low-dimensional latent representation of a scene as a collection of visual attributes. As the ground-truth data does not exist in the ActEV dataset for attributes of interest, specifically orientation of cars in the ground-plane with respect to the camera, we synthesize this data. We show how we can successfully transfer a car orientation classifier, and use its predictions in our defined set of visual attributes to classify actions in video.","PeriodicalId":254512,"journal":{"name":"2019 IEEE Winter Applications of Computer Vision Workshops (WACVW)","volume":"15 27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124255103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ActEV18: Human Activity Detection Evaluation for Extended Videos
Pub Date: 2019-01-01 | DOI: 10.1109/WACVW.2019.00008
Yooyoung Lee, J. Fiscus, A. Godil, David Joy, Andrew Delgado, Jim Golden
Video analytic technologies that are able to detect and classify activities are crucial for applications in many domains, such as transportation and public safety. In spite of many data collection efforts and benchmark studies in the computer vision community, there has been a lack of system development that meets the practical needs of such domain-specific applications. In this paper, we introduce the Activities in Extended Video (ActEV) challenge to facilitate development of video analytic technologies that can automatically detect target activities, and identify and track objects associated with each activity. To benchmark the performance of currently available algorithms, we initiated the ActEV18 activity-level evaluation along with reference-segmentation and leaderboard evaluations. We present a summary of results and findings from these evaluations. Fifteen teams from academia and industry participated in the ActEV18 evaluations using 19 activities from the VIRAT V1 dataset.
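As an illustration of the style of detection measure used in such evaluations, the sketch below computes a probability of missed detection at a fixed rate of false alarms per minute of video (Pmiss@RFA). It is a simplified toy stand-in, not the official NIST ActEV scorer, and assumes system detections have already been aligned to reference activity instances.

```python
# Toy illustration of an ActEV-style measure: probability of missed detection
# (Pmiss) at a fixed rate of false alarms per minute of video (RFA).
import numpy as np

def pmiss_at_rfa(scores_tp, scores_fp, n_ref, video_minutes, target_rfa):
    """Pmiss at the lowest threshold whose false-alarm rate stays <= target_rfa."""
    thresholds = np.sort(np.concatenate([scores_tp, scores_fp]))[::-1]
    best = 1.0
    for t in thresholds:
        rfa = np.sum(scores_fp >= t) / video_minutes
        if rfa > target_rfa:
            break
        pmiss = 1.0 - np.sum(scores_tp >= t) / n_ref
        best = min(best, pmiss)
    return best

rng = np.random.default_rng(0)
scores_tp = rng.normal(1.0, 0.5, 80)    # confidences of correctly matched detections
scores_fp = rng.normal(0.0, 0.5, 300)   # confidences of unmatched (false) detections
print("Pmiss@0.15RFA:", pmiss_at_rfa(scores_tp, scores_fp, n_ref=100,
                                     video_minutes=600, target_rfa=0.15))
```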
{"title":"ActEV18: Human Activity Detection Evaluation for Extended Videos","authors":"Yooyoung Lee, J. Fiscus, A. Godil, David Joy, Andrew Delgado, Jim Golden","doi":"10.1109/WACVW.2019.00008","DOIUrl":"https://doi.org/10.1109/WACVW.2019.00008","url":null,"abstract":"Video analytic technologies that are able to detect and classify activity are crucial for applications in many domains, such as transportation and public safety. In spite of many data collection efforts and benchmark studies in the computer vision community, there has been a lack of system development that meets practical needs for such specific domain applications. In this paper, we introduce the Activities in Extended Video (ActEV) challenge to facilitate development of video analytic technologies that can automatically detect target activities, and identify and track objects associated with each activity. To benchmark the performance of currently available algorithms, we initiated the ActEV’18 activity-level evaluation along with reference segmentation and leaderboard evaluations. In this paper, we present a summary of results and findings from these evaluations. Fifteen teams from academia and industry participated in the ActEV18 evaluations using 19 activities from the VIRAT V1 dataset.","PeriodicalId":254512,"journal":{"name":"2019 IEEE Winter Applications of Computer Vision Workshops (WACVW)","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121348905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novel Activities Detection Algorithm in Extended Videos
Pub Date: 2019-01-01 | DOI: 10.1109/WACVW.2019.00009
L. Yao, Ying Qian
Motivated by our participation in the TRECVID ActEV [1] competition, we conduct research on temporal activity recognition. In this paper, we propose a system that detects activities and localizes them temporally in extended videos. Our system first detects objects in video frames. Second, the positions of the detected objects are used as input to an object-tracking model, which obtains motion information for multiple objects across consecutive frames. Lastly, consecutive video frames containing only the detected objects are fed into a 3D convolutional neural network (3D CNN) to extract features, and the 3D CNN is followed by a recurrent neural network that accurately localizes the detected activity.
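To make the described pipeline concrete, here is a minimal structural sketch of its final stage in PyTorch: a small 3D CNN over object-centric clips followed by a recurrent layer producing per-frame activity scores. Layer sizes, the number of activity classes, and the output format are illustrative assumptions; the upstream object detector and tracker are not shown.

```python
# Structural sketch of the last stage of the pipeline: 3D CNN features over
# object-centric clips, followed by an LSTM for temporal localization.
import torch
import torch.nn as nn

class Clip3DCNNRNN(nn.Module):
    def __init__(self, num_classes: int = 19):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),   # keep the temporal axis
        )
        self.rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, 3, time, height, width), cropped around tracked objects
        feats = self.cnn(clips)                                 # (batch, 32, time, 1, 1)
        feats = feats.flatten(3).squeeze(-1).permute(0, 2, 1)   # (batch, time, 32)
        out, _ = self.rnn(feats)
        return self.head(out)                                   # per-frame activity scores

model = Clip3DCNNRNN()
scores = model(torch.randn(2, 3, 16, 64, 64))
print(scores.shape)                                             # torch.Size([2, 16, 19])
```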
{"title":"Novel Activities Detection Algorithm in Extended Videos","authors":"L. Yao, Ying Qian","doi":"10.1109/WACVW.2019.00009","DOIUrl":"https://doi.org/10.1109/WACVW.2019.00009","url":null,"abstract":"Due to participation in TRECVID ActEV[1] competition, we conduct research on temporal activity recognition. In this paper, we propose a system for activity detection and localize detected activities temporally in extended videos. Our system firstly detects objects in video frames. Secondly, we use position information of detected object, as input to the object tracking model, which can obtain motion information of multiple objects in consecutive frames. Lastly, we input consecutive video frames containing only detected objects into 3D Convolutional Neural Network to achieve features, and 3D CNN is followed by a recurrent neural network for accurately localizing the detected activity.","PeriodicalId":254512,"journal":{"name":"2019 IEEE Winter Applications of Computer Vision Workshops (WACVW)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114861208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting Soft Biometric Attributes from 30 Pixels: A Case Study in NIR Ocular Images
Pub Date: 2019-01-01 | DOI: 10.1109/WACVW.2019.00024
Denton Bobeldyk, A. Ross
In this work, we investigate the possibility of extracting soft biometric attributes, viz., gender, race, and eye color, from down-sampled near-infrared ocular images. In particular, we evaluate the possibility of deducing gender, race, and eye color from ocular images as small as 5 × 6 pixels. Our preliminary analysis yields the surprising result that gender, race, and eye color cues are still available in such low-resolution near-infrared images. This research bolsters the assertion previously made in the literature that certain soft biometric attributes can be deduced from poor-quality biometric data.
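A minimal sketch of the core experiment, under stated assumptions: down-sample ocular images to 5 × 6 pixels and train a classifier directly on the resulting 30-value pixel vectors. The data are random placeholders, and the plain logistic-regression classifier stands in for whatever feature scheme the paper actually evaluates.

```python
# Sketch: aggressively down-sample ocular images and classify a soft attribute
# from the tiny pixel vector. Arrays are random placeholders for NIR ocular data.
import numpy as np
from skimage.transform import resize
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
images = rng.random((200, 120, 160))        # placeholder "NIR ocular" images
labels = rng.integers(0, 2, 200)            # e.g. placeholder gender labels

# Down-sample each image to 5x6 pixels (30 values) and flatten.
X = np.array([resize(img, (5, 6), anti_aliasing=True).ravel() for img in images])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy on 30-pixel inputs:", clf.score(X, labels))
```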
{"title":"Predicting Soft Biometric Attributes from 30 Pixels: A Case Study in NIR Ocular Images","authors":"Denton Bobeldyk, A. Ross","doi":"10.1109/WACVW.2019.00024","DOIUrl":"https://doi.org/10.1109/WACVW.2019.00024","url":null,"abstract":"In this work, we investigate the possibility of extracting soft biometric attributes, viz., gender, race and eye color, from down-sampled near-infrared ocular images. In particular, we evaluate the possibility of deducing gender, race and eye color from ocular images as small as 56 pixels. Our preliminary analysis yields the surprising result that gender, race and eye color cues are still available in such low-resolution near-infrared images. This research bolsters the previously made assertion in the literature that certain soft biometric attributes can be deduced from poor quality biometric data.","PeriodicalId":254512,"journal":{"name":"2019 IEEE Winter Applications of Computer Vision Workshops (WACVW)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134081592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Considering Race a Problem of Transfer Learning
Pub Date: 2018-12-12 | DOI: 10.1109/WACVW.2019.00022
Akbir Khan, M. Mahmoud
As biometric applications are fielded to serve large population groups, issues of performance differences between individual sub-groups are becoming increasingly important. In this paper, we examine cases where we believe race is one such factor. We look in particular at two forms of problem: facial classification and image synthesis. We take the novel approach of considering race as a boundary for transfer learning in both the task (facial classification) and the domain (synthesis over distinct datasets). We demonstrate a series of techniques to improve transfer learning for facial classification, outperforming similar models trained in the target's own domain. We also conduct a study to evaluate the performance drop of Generative Adversarial Networks trained to conduct image synthesis; in the process, we produce a new annotation of the CelebA dataset by race. These networks are trained solely on one race and tested on another, demonstrating that the subsets of CelebA constitute distinct domains for this task.
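A minimal sketch of the transfer-learning setting for facial classification follows: pretrain a classifier on abundant data from one subgroup, then fine-tune only its head on a small sample from another. The features, architecture, and attribute are placeholders, not the models or datasets used in the paper.

```python
# Sketch of cross-subgroup transfer: pretrain on a source subgroup, then adapt
# only the classification head on a small target-subgroup sample.
import torch
import torch.nn as nn

def train(model, X, y, params, epochs=50, lr=1e-2):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(1), y)
        loss.backward()
        opt.step()

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

# Source domain: plentiful labelled data from subgroup A (placeholder features).
X_src, y_src = torch.randn(2000, 128), torch.randint(0, 2, (2000,)).float()
train(model, X_src, y_src, model.parameters())

# Target domain: small labelled sample from subgroup B; adapt only the head.
X_tgt, y_tgt = torch.randn(100, 128), torch.randint(0, 2, (100,)).float()
train(model, X_tgt, y_tgt, model[-1].parameters())

with torch.no_grad():
    preds = (model(X_tgt).squeeze(1) > 0).float()
    print("target-domain accuracy:", (preds == y_tgt).float().mean().item())
```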
{"title":"Considering Race a Problem of Transfer Learning","authors":"Akbir Khan, M. Mahmoud","doi":"10.1109/WACVW.2019.00022","DOIUrl":"https://doi.org/10.1109/WACVW.2019.00022","url":null,"abstract":"As biometric applications are fielded to serve large population groups, issues of performance differences between individual sub-groups are becoming increasingly important. In this paper we examine cases where we believe race is one such factor. We look in particular at two forms of problem; facial classification and image synthesis. We take the novel approach of considering race as a boundary for transfer learning in both the task (facial classification) and the domain (synthesis over distinct datasets). We demonstrate a series of techniques to improve transfer learning of facial classification; outperforming similar models trained in the target's own domain. We conduct a study to evaluate the performance drop of Generative Adversarial Networks trained to conduct image synthesis, in this process, we produce a new annotation for the Celeb-A dataset by race. These networks are trained solely on one race and tested on another - demonstrating the subsets of the CelebA to be distinct domains for this task.","PeriodicalId":254512,"journal":{"name":"2019 IEEE Winter Applications of Computer Vision Workshops (WACVW)","volume":"247 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115187462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}